
If this is intended as a defense of math culture, I think it falls short. He does a good job of pointing out the corresponding problems in the state of programming:

Indeed, the opposite problems are familiar to a beginning programmer when they aren’t in a group of active programmers. Why is it that people give up or don’t enjoy programming? Is it because they have a hard time getting honest help from rudely abrupt moderators on help websites like stackoverflow? Is it because often when one wants to learn the basics, they are overloaded with the entirety of the documentation and the overwhelming resources of the internet and all its inhabitants? Is it because compiler errors are nonsensically exact, but very rarely helpful? Is it because when you learn it alone, you are bombarded with contradicting messages about what you should be doing and why (and often for the wrong reasons)?

The difference is that the CS community recognizes that these are problems; every single thing he's complaining about is an open problem being taken seriously and attacked from multiple directions, and there is hope for serious improvement in the coming decades. Anyone who thinks rude snobs, bad documentation, or useless compiler errors are beneficial is rightly ridiculed as a smug weenie or accused of having an ulterior motive.

By contrast, mathematicians are defensive and complacent about their arcane, non-inclusive notation and communication: "At this point you might see all of this as my complaining, but in truth I’m saying this notational flexibility and ambiguity is a benefit." Look at the litany of problems he just presented. Consider the fact that mathematics is not the only complicated subject that requires complicated, flexible, and rigorous notation. It just isn't credible that the shitty state of mathematical notation is either necessary or unavoidable. The occasional counterexample, where someone with a good understanding of a subject presents it in full rigor without resorting to the usual obfuscation, is a hint of what could be.

If your publications cannot be read without an expert interpreter, they are defective. Hypertext has been around for decades; if you're going to invent your own ad-hoc (or even standardized!) syntax to solve a problem, your readers have a right to documentation of what your notation means.




This isn't entirely true. Numerous mathematicians have publicly bemoaned the difficulty of communication between mathematicians in different specialties. So it's definitely not something that is ignored.

The question is what to do about it. We're not just talking about confusion arising from different notation between mathematical specialties (resolving that would be as easy as defining your notation in an appendix), but different, equally valid ways of mathematical thinking.

In some ways, it may be better to think of different mathematical specialties as different programming languages. Proficiency in one will help you, but won't guarantee that you can interpret another. Except that in mathematics, the differences are more extreme. If you have two Turing-complete programming languages, then you have two different tools that can solve the same class of problems. But different fields in mathematics deal with entirely different mathematical objects which require a conceptual instead of notational leap on the part of the reader. It's not simply a matter of figuring out how to write for loops or manipulate strings in the new language. You actually have entirely different ideas in each, and trying to impose some common notational standard among them is fraught with problems.


I agree with you and the OP that creating a standard notation for all math is problematic. And you're right that there are conceptual difficulties behind the math that are not going to be automatically resolved by clearer communication. But a lot of the difficulties are accidents of culture.

To continue the programming language/mathematical notation analogy: in a Lisp with macros, you can use whatever notation you want to describe your program, but you have to actually define what your notation means. Neglecting to provide the definitions of the macros would be the equivalent of a logical argument that leaves off its premises, which is the state of math publications today from the perspective of the poor sap who has to read them (the definitions live only in the mind of the person who wrote them).
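Python has no macros, but operator overloading is a loose stand-in for the same point, so here's a sketch in Python (the Vec class is made up for illustration): the "@" notation below only means something because its definition is written down.

    # Custom notation via operator overloading. "v @ w" is only
    # meaningful because __matmul__ is defined here; omit the
    # definition and the notation is opaque to the reader.
    class Vec:
        def __init__(self, xs):
            self.xs = xs

        def __matmul__(self, other):
            # Declare "@" on Vec to mean the dot product.
            return sum(a * b for a, b in zip(self.xs, other.xs))

    print(Vec([1, 2]) @ Vec([3, 4]))  # prints 11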


Fair enough. But even though there's no hope of coming up with a single programming language for everything, we do have quite good tools for popular areas. It seems like mechanized error-checking of proofs in some of the more useful and popular subfields of math might be just what non-mathematicians need?


Well, we do have proof assistants. I think people are starting to use those in more diverse fields of math now--I remember seeing an article about an algebraic proof in Coq, but I don't remember any of the details.
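For a flavor of what that looks like, here's a minimal machine-checked proof in Lean (my own toy example, not the Coq development from the article I mentioned):

    -- A tiny machine-checked proof: addition on the natural numbers
    -- is commutative, discharged by a standard library lemma.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b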


I second this.

In programming, different paradigms differ as much as different fields in math do. So a more concrete analogy would be jumping between programming paradigms: someone who has used an imperative language like Java their entire life must learn to read a Haskell program.
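To keep the illustration in one language, here's roughly what that jump feels like, sketched in Python with made-up function names: the same sum written in an imperative style and in a functional style.

    from functools import reduce

    def total_imperative(xs):
        # "Java-like": mutate an accumulator step by step.
        total = 0
        for x in xs:
            total += x
        return total

    def total_functional(xs):
        # "Haskell-like": describe a fold, with no visible mutation.
        return reduce(lambda acc, x: acc + x, xs, 0)

    assert total_imperative([1, 2, 3]) == total_functional([1, 2, 3]) == 6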


I certainly agree that communication can be improved (and this is one of the main reasons I spend so much of my time improving my technical communication skills via blogging). But I stand by my defense of the need for flexibility. It's extremely hard to explain why without delving into technical details, but there are times when abuses of notation are honestly much more helpful than they are hurtful. Usually they exist to reduce clutter in the syntax, and they are often stated in words during a presentation (the abuses go unstated mostly in higher-end research papers, in fields where the foundational book every practitioner is obligated to read spells out all of those assumptions in full).
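A standard example of the kind of abuse I mean (my own illustration, not from the post): the same superscript notation carries different meanings purely by convention, and readers are expected to know which one applies.

    \sin^2 x = (\sin x)^2    % superscript as a pointwise power
    f^2(x) = f(f(x))         % superscript as iteration, in other contexts
    f^{-1}(S)                % the preimage of S, even when f is not invertible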

Of course, the more of these you pile on, the harder it becomes to explain things to a newcomer. So I'm not trying to defend math culture as being right in all ways. I'm just trying to explain why things are the way they are, and to point out that there are rational reasons for them (or else why would it have been done?). When you learn the material in real life, each of these "abuses" is added one small step at a time (maybe I learn about one new abuse of syntax every month or two), so you never have to absorb them all simultaneously, and it's not such a big deal.


Changing the notation is fine. Changing it without explaining what the new notation means is where the problem lies.

Maybe it would help if we put papers online with the ability for anyone to annotate and add explanations, or links to other papers for context.


That's a very good idea. As of now I don't even know if there's a structured way to display known errata in a paper.


It's kind of like list comprehensions in Python. For loops do just fine, but they're a pain to write all the time, and in some cases we just want a clear one-line way to say "add 1 to everything in this list."
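Concretely (a trivial sketch of that exact example):

    nums = [1, 2, 3]

    # With an explicit for loop:
    bumped = []
    for n in nums:
        bumped.append(n + 1)

    # With a list comprehension, the same thing in one line:
    bumped = [n + 1 for n in nums]  # [2, 3, 4]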



