
Mathematical notation is more concise, which may take some getting used to. One reason is that it is optimized for handwriting. Writing program code out by hand would be very tedious, so you can see why mathematical notation is the way it is.

Apart from that, there is no “the code” equivalent. Mathematical notation is for stating mathematical facts or propositions. That’s different from the purpose of the code you would write to implement deep-learning algorithms.
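A toy illustration of the difference (my own example, nothing from any particular textbook): the statement theta_{t+1} = theta_t - eta * grad L(theta_t) just asserts a relationship, while the code that implements it has to commit to data types, a starting point, a step count, and so on:

    import numpy as np

    def gradient_descent(grad, theta0, lr=0.1, steps=100):
        # theta_{t+1} = theta_t - lr * grad(theta_t), plus all the
        # bookkeeping the notation never has to mention
        theta = np.asarray(theta0, dtype=float)
        for _ in range(steps):
            theta = theta - lr * grad(theta)
        return theta

    # minimize f(theta) = ||theta||^2, whose gradient is 2*theta
    print(gradient_descent(lambda t: 2 * t, [3.0, -4.0]))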




The last part was a big hurdle for me as an early undergrad. I was a fairly strong programmer toward the end of high school, and was trying to think of math as programming. That worked for the fairly algorithmic high school stuff and I got good grades, but it meant I was awful at writing proofs. I also went through a phase where I used as much formal logical notation, and the rules for manipulating it, as possible in order to make proofs feel more algorithmic to me, but that both didn't work well for me and produced some downright unreadable results.

Mathematical notation is really a shorthand for words, like you’d read text. The equals sign is literally short for equals. The added benefit, as others have pointed out, is that a good notation can sometimes be clearer than words because it makes certain conclusions almost obvious. You’ve done the hard part in finding a notation that captures exactly the idea to be demonstrated in its encoding, and the result is a very clean manipulation of your notation.
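A small example of what I mean (mine, nothing deep): written in notation, the telescoping identity

    (a_2 - a_1) + (a_3 - a_2) + ... + (a_{n+1} - a_n) = a_{n+1} - a_1

is almost self-evident, because the cancellation is visible on the page. Stated purely in words ("the sum of the consecutive differences of a sequence equals the last term minus the first"), you have to rebuild that picture in your head.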


This is essentially my problem. I started writing programs at a young age and was introduced (unknowingly) to many more advanced mathematical concepts from that perspective rather than through pure mathematics. What was it that helped break this paradigm for you?


Really trial and error and grinding through proofs. Working through Linear Algebra Done Right was a big a-ha moment for me. Since I was self-studying over the summer (to remedy my poor linear algebra skills), I was very rigorous in making sure I understood every line of the proofs in the text and trying to mimic his style in the exercises.

In hindsight, I think the issue was that trying to map everything to programming is a bad idea, and I was doing it because programming was the best tool in my tool chest. It was a real “when all you have is a hammer, everything looks like a nail” issue for me.


I wrote a book: https://pimbook.org

You might find it useful for your situation. The PDF is pay-what-you-want if you don't feel like paying for it.


Ah, I think I remember bookmarking this when it was posted before. You really don't have to go very far in computing to find a frontier where most everything is described in pure mathematics, so it becomes a substantial barrier for undiversified autodidacts in the field. The math in these areas can often be quite advanced and difficult to approach without the proper background, so I appreciate anyone who has taken the time to make it less formidable to others.


I would suggest something like https://ocw.mit.edu/courses/6-042j-mathematics-for-computer-... instead of that book.

I appreciate that some may find the book useful, but I personally don't agree with the presentation. There are too many conceptual errors in the book that you need to unlearn to make progress. For example, the book describes R^2 as a "pair" of real numbers. This is very much untrue and that kind of thinking will lead you even further astray.

I say this as someone with a math/CS degree and a PhD who has taught these topics to hundreds of students.


>For example, the book describes R^2 as a "pair" of real numbers.

I naturally auto-corrected this to "(the set of) pairs of real numbers". If that's the case, then I don't see how this differs from the actual definition. What is the conceptual error? Is it the missing 'set of'?
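For reference, the usual definition is exactly that:

    R^2 = R × R = { (x, y) : x, y ∈ R }

i.e. the set of all ordered pairs of real numbers.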


> For example, the book describes R^2 as a "pair" of real numbers.

From page 15:

> The one piece of new notation is the exponent on R^2. This means "pairs" of real numbers.

Your interpretation of this quote is uncharitable at best. Using it to make a blanket assertion about the book is just silly, and quite out of the spirit of mathematics.

In particular, page 19 has an example of the kind of things that my book has that other books don't: a discussion of the soft skills of learning math and the cultural acclimation process:

> Though it sometimes makes me cringe to say it, give the author the benefit of the doubt. When things are ambiguous, pick the option that doesn’t break the math. In this respect, you have to act as both the tester, the compiler, and the bug fixer when you’re reading math. The best default assumption is that the author is far smarter than we are, and if you don’t understand something, it’s likely a user error and not a bug. In the occasional event that the author is wrong, it’s often a simple mistake or typo, to which an experienced reader would say, “The author obviously meant ‘foo’ because otherwise none of this makes sense,” and continue unscathed.

The course you suggested is the sort of "grab bag of topics" course, meant to cram the basics of every topic a CS major might want to know for doing the kind of CS theory research that MIT cares about. If you find math hard, I doubt that will make it much easier, but it could be good to do alongside a book like mine if you find my book too easy.


I don't know that this has anything to do with programming.

Arithmetic and writing proofs are very different skills. There is going to be a gap for everyone.


Yeah I know it’s a common challenge. I think it took me a bit longer than some of my peers because I was trying to force it to be like something I knew instead of meeting it on its own terms.

When all you have is a hammer and all that


> Mathematical notation is for stating mathematical facts or propositions.

And as such it is way too often abused. Because the (original, and the most useful) purpose of mathematical notation is to enable calculation, i.e., in a general sense, to make it possible to obtain results by manipulating symbols according to certain rules.


I see the steps of a calculation as stating a sequence of mathematical facts, so that’s just an instance of the general definition.


Sure, but the whole point is to avoid the need to do that! Manipulating symbols is the way to automate reasoning, i.e. to get to a result while completely ignoring said "facts." Using the symbols to merely "state the facts" is abuse (of the reader, mostly).
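A trivial example of what I mean (my own, nothing deep): solving

    2x + 6 = 10
    2x = 4       (subtract 6 from both sides)
    x = 2        (divide both sides by 2)

is pure symbol-pushing. Each line follows from the previous one by a mechanical rule, and you never have to stop and re-interpret what the expressions "mean" along the way. That's the kind of work good notation is built for.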



