Making math more Lego-like (news.harvard.edu)
186 points by jonbaer on March 3, 2017 | 57 comments




Bob Coecke's "Kindergarten Quantum Mechanics": http://www.cs.ox.ac.uk/bob.coecke/VaxjoProc.pdf



well, there goes my Friday night. Cheers :)


There's a history running from Penrose's introduction of tensor networks to 3D tensor networks and the quon language: https://dl.dropboxusercontent.com/u/78485948/PNAS-2017-Biamo...


This gives more of a physical development, and frames the Lego-like language in the context of the cross-disciplinary field of quantum information.


There is precisely one person who can be productive with Penrose notation, and that's Roger Penrose.

You should see him do it, though - very impressive.


This isn't true. I've personally found proofs in the diagrammatic notation that I wouldn't have found without it.


Also similar to work by Louis H. Kauffman on representing QM with knots & braids: https://arxiv.org/pdf/0903.1376.pdf


Also amplituhedra for computing particle interactions.


Why do this rather than define an abstract, human-evaluatable, programming language? Give it some S-expr syntax and make more things implicit and you're golden.

Making strange squiggles on a page doesn't help. Making strange Greek symbols on the page doesn't help. Writing functions, "unit tests", building on abstractions, and creating "libraries" do help. We all know this because it works well in CS. Why would this not work well in physics?

Ditch the alphabet soup (and yes, even the 5th century Greek alphabet soup). Abstract common ideas into "functions". Write human-provable "test-cases". Add documentation. Use logically derived variable names (even if they're 5-10 characters). Formalize the definitions and syntax of this language and its abilities. You'd be golden.

Phi shouldn't mean 30 things depending on the field you're in.

Am I wrong? Is this not possible? Is there a reason why math and logic can't be expressed as an abstract programming language? Is there a reason why it wouldn't be better to have a completely standard way to define all of your algorithms and logical assertions?

I could be wrong, but no one has explained why what we do now is better. Nor does this (what kmill has linked) look much better.


> Am I wrong? Is this not possible?

Yes, you're wrong.

I mean, I agree with you. This is the route we should commit to. Building up mathematical theories in a programming language.

But I think you've reached the right conclusion for horribly confused reasons, and that you severely over-estimate the state of the art in CS. This over-estimation results in you not realizing why something like this could be a valuable tool within the world you describe.

Perhaps I'm wrong, but here's what gives me this impression:

> Write human-provable "test-cases"

Since the article is discussing mathematics, I suppose here by proof you actually really honestly mean proof.

So proof by "I worked through 6 examples and they all turned out OK so far"?

That's not typically what a mathematician means when they say they proved something. Proofs can get horribly complicated in ways that test cases generally don't.

> Writing functions, "unit tests", building on abstractions, and creating "libraries" do help. We all know this because it works well in CS.

Yes, all of these things would be great.

But it's REALLY not that simple.

Formal verification of software systems remains a mostly academic pursuit. There's some promising early work, but almost none of "us" formally verify our software.

It does NOT work well in CS. Not at the moment. And most things we do in CS are conceptually simpler than a lot of mathematics. So exploring alternative interfaces makes a lot of sense.

> Is this not possible? Is there a reason why math and logic can't be expressed as an abstract programming language?

They can be. It's horrendously verbose, and it's difficult to do novel work in such a formal setting.

It does not work well in the same sense that it does not work well in CS -- you can do it, if you're willing to spend 10x the time/money/effort for the "good enough" version (in math, paper proofs; in CS, shippable but unverified code).


> I think the problem is that you don't understand what a real result looks like in math

Unfortunately I've been plagued with the task of reading some math papers to understand some things in the past. I still wake up in a cold sweat sometimes.

> So proof by "I worked through 6 examples and they all turned out OK so far"?

> That's not typically what a mathematician means when they say they proved something...

I'm well aware; that's why I included the quote marks. I mean a logical proof based on how this "function's" suppositions relate to the rest of the "functions" it is integrated with, which provide context and mathematical relations to the other parts of the conjecture you're working on.

> Formal verification of software systems remains a mostly academic pursuit. There's some promising early work, but almost none of "us" formally verify our software.

> So no, you're wrong. It does NOT work well in CS. Not at the moment.

I mean this is just blatantly wrong and ignores the point of what I'm saying. I'm implying that:

1) If you have a pinnacle example of computer science, you can take a freshman-level CS student, give them a tour of the language, let them loose on a problem, and they'll understand what's going on. Some of the world's most complex simulation systems have high-school children writing games in them. Things like Unity and Unreal Engine represent thousands and thousands of man-hours, along with entire libraries of research, industry-funded experiments, and years of expertise. I'm saying that the common practices of programming can yield systems that are simple for anyone to maintain and work on. I can't say the same for mathematics. While I understand the concepts of derivatives, integrals, and calculus, if tasked I'd most likely not be able to provide a formula for the area of a new shape, despite understanding the core material far better than most programmers understand the new project they've been flung into.

2) The concepts from computer science and programming can be adapted to suit math's needs, and these adaptations can be fluid: novel augmentations of the process that yield positive results can be further integrated into mathematics. Programming and computer science are quickly evolving fields and hold a unique position in that many people engage in them without formal training from a university, because in most cases it's simply unneeded. You can pick up one book and then rely on the common goal of every computer scientist and programmer that came before you: abstracting concepts to yield simple solutions to arbitrarily complex problems. Because of this you can jump in anywhere and pick up and run with your work.

I'm not saying these ideas are directly applicable with no modification. What I am saying is "these are the ideas that make my field possible," and suggesting that standardizing practices in the math-heavy fields would not be a bad thing, especially since these same standardized practices have been used to great effect in our field to produce a set of very positive outcomes.


TL;DR: The idea you express -- a programming-like approach to mathematics -- has been around since the dawn of Computer Science. I could even make the case that this idea pre-dates the computer by a couple hundred years.

There are literally hundreds of researchers around the world working toward this vision. But this remains a niche approach, and there are some very real hurdles to wide-spread use. Therefore, exploring different approaches is certainly justifiable.

--

> Unfortunately I've been plagued with the task of reading some math papers to understand some things in the past. I still wake up in a cold sweat sometimes.

I'm with you there :-)

Although it's perhaps telling that many CS papers are just as bad, if not worse, even when they come with code! Executable ideas aren't necessarily easier to understand...

>> formal verification is a mostly academic pursuit

> I mean this is just blatantly wrong

It's really not though!

Things have really picked up over the past 10 years, but a vast majority of the software we use -- and a vast majority of the software being created -- does not make significant use of formal proof.

And certainly not the sort of functional correctness verification efforts that would be analogous to formalized mathematics.

> Some of the world's most complex simulation systems have high-school children writing games in them

This is like saying that multivariate calculus is child's play because children can manipulate things in R3. There's a kernel of truth, but it's mostly wildly incorrect.

Most Mathematicians generate proofs, not pretty simulations.

Besides, and more to the point, those kids can do that precisely because those engines/frameworks/languages abstract away a lot of very difficult mathematics that they couldn't hope to understand! Which is exactly what the work described in the article could achieve, in a different domain and for a user at a different level of maturity.

>I'm saying that the common practices of programming can yield systems that are simple for anyone to maintain and work on.

And I'm saying that the history of formal verification in CS and proof assistants in mathematics provide direct, empirical evidence that this just isn't true.

Doing formal mathematics in a proof assistant (aka programming language) is hard and time-consuming. Maybe one day soon it will be practical for most mathematicians, but that day is not today.

>I'm not saying these ideas are directly applicable with no modification

People have been trying your idea since literally the dawn of Computer Science. Look up "proof assistant" in Wikipedia.

"No modification" is the under-statement of the century. Figuring out those modifications is an entire not-so-cottage industry in CS research. There are entire families of conferences dedicated to the topic.

So again, I agree with you! But you're vastly under-estimating the difficulty of what you suggest, and as a result, disregard interesting and possibly complementary approaches.


> Although it's perhaps telling that many CS papers are just as bad, if not worse, even when they come with code! Executable ideas aren't necessarily easier to understand...

I'm not too fond of academic-anything. Most papers are bad, period. Most journals are bad, period. That's speaking in terms of this "good/bad" dichotomy; there are good and bad things in most of them that you and I can recognize.

Sadly, Computer Science in academia is little more than math. In industry it means something completely different. I think Linus is more of a computer scientist than someone who comes up with a new cryptography algorithm, or homomorphic encryption method, or whatever is "hip" these days.

> Things have really picked up over the past 10 years, but a vast majority of the software we use -- and a vast majority of the software being created -- does not make significant use of formal proof.

> And certainly not the sort of functional correctness verification efforts that would be analogous to formalized mathematics.

I'm not talking about formal verification when I talk about computer science/programming methods for representing ideas.

Also, formal verification isn't an academic endeavor. You've got a CPU in your system whose VHDL or Verilog was formally verified.

You've got Boeing planes flying in the air that are almost completely verified.

But I don't think formal verification is the most important thing! I think the way we represent our ideas in a clear and understandable fashion is important. My complaints are about language, not proof or semantics.

Also, in math your proof is your program, so this is an entirely different issue.

> This is like saying that multivariate calculus is child's play because children can manipulate things in R3. There's a kernel of truth, but it's mostly wildly incorrect.

> Most Mathematicians generate proofs, not pretty simulations.

I'm not saying a high schooler is Donald Knuth. I'm saying they can benefit from the abstractions they stand on to create further experimentation for themselves. One day, I do think, they will become Donald Knuths with better understandings of these building blocks they have access to.

Do you think someone who has never touched a game engine will have an easier time understanding what a game engine does than someone who has worked with them since their early teens? I fully expect high schoolers who like math to be able to experiment with the implications of math, just as a high schooler who likes computers can currently experiment easily with the implications of computer programs.

> And I'm saying that the history of formal verification in CS and proof assistants in mathematics provide direct, empirical evidence that this just isn't true.

I'm not talking about formal verification. I'm talking about the ways we express our ideas. Computer Science != formal verification.

I am talking about how programmers express their ideas and the steps of solving a problem.

> "No modification" is the under-statement of the century.

I think your "formal verification is largely an academic endeavor" takes the cake on that one.


I'm not sure what to make of these comments here. What Linus does is a mix of software engineering, project management, people management, and product management. Computer science might be a distant fifth, as is the case with most people in the industry. There's no sense in poking fun at homomorphic encryption for being "hip". It's hip because it's a result that computer scientists have been trying to discover for many years, one with tons of potential applications.

I also have to agree that formal verification is largely an academic endeavor. Static analysis is taking off these days but that's not formal verification in any sense. There are a few projects here and there that are formally verified, and certainly there are tools for formal verification, but I've met precious few people in industry who even know how to use proof assistants. This includes my experience working with aerospace clients. Formally verified components, either hardware or software, tend to be small. The first formally verified kernel, for example, is the seL4 microkernel and verification took from 2006 to 2009. Suffice it to say there are few formally verified kernels floating around.


TL;DR: If you want to do mathematics in a programming language as you suggest, then you need a programming language that can express proofs. Mathematics without proofs isn't mathematics. The ergonomics issues present in all of the existing languages for describing mathematical proofs are one way to motivate the work described in the article.

> I'm not too fond of academic-anything.

There's a difference between recognizing your personal tastes and being dismissive of entirely reasonable enterprises.

> Most papers are bad, period. Most journals are bad, period.

Most software is bad, period. Most documentation of software is bad, period. Especially when written by 20-somethings straight out of university, which describes the vast majority of paper writers in academia.

...what's your point?

> I think Linus is more of a computer scientist than someone who comes up with a new cryptography algorithm

What benefit is there in re-defining extremely well-established terms? It doesn't actually serve to convince anyone of anything, and confuses conversations.

> I'm not talking about formal verification when I talk about computer science/programming methods for representing ideas.

So then you dismiss mathematics as a discipline? Because writing some code and some test cases is not mathematics. Modern Mathematics means proofs. Period.

Proof assistants are languages we've invented for describing proofs. They are literally exactly what you've asked for in your original post.

And they're pretty difficult to use. So exploring different approaches for describing mathematical structures and proofs makes a lot of sense.

> Also, formal verification isn't an academic endeavor. You've got a CPU in your system whose VHDL or Verilog was formally verified.

Yes, I acknowledged that we've made tremendous progress!

But formal verification is still extremely resource intensive and extremely rare in software.

> You've got Boeing planes flying in the air that are almost completely verified.

Where did you even get this idea? It isn't true.

Every line of code might've been run through Coverity or whatever. Some static analysis is very different from verification of functional correctness properties.

> Also, in math your proof is your program, so this is an entirely different issue.

Yes, this is exactly my point! To do math like programming, you need a language for describing proofs. All of the languages that exist for doing this are difficult to use. Maybe visual languages could help out here.

> I'm saying they can benefit from the abstractions they stand on to create further experimentation for themselves.

You're trying to argue that the success of modern programming practices indicates that we can do the same in math. But 40 years of proof assistants remaining niche demonstrate this simply isn't true. It's a good idea, but it's not easy to get right.

> I'm not talking about formal verification. I'm talking about the ways we express our ideas.

In mathematics, proofs are the critical piece of output. Expressing proofs is step zero toward "expressing our ideas" in mathematics.

If you want to "do mathematics" in a language, as you suggest here, your language needs to be able to express proofs.

> Computer Science != formal verification.

And math != hacking out code.

> I think your "formal verification is largely an academic endeavor" takes the cake on that one.

Okay. When was the last time you wrote production code that had a functional correctness proof? What percentage of software written yesterday would you guess has, or one day will have, a functional correctness proof?

Or, to borrow your point about children, when was the last time a high school student wrote a proof in a proof assistant (i.e. did math in a programming language)?


When you write a function, you still need to give the arguments names. When you write an algorithm and want to share it with someone, you still need to give the variables names. That's all the Greek symbols are - and mathematical notation is an abstract, human-evaluatable programming language.

Common ideas are abstracted into functions: sine, cosine, min, max, even weird things like Bessel functions and Legendre polynomials are all written as function(arguments) in maths.

Maths has indexing, a bunch of operators, and what the operators do often depends on what type of objects they're applying to.

Python's list and set comprehensions etc actually come from a real mathematical notation for defining sets.

All those capital sigmas and capital pi's are reduction operators - sums and products. There isn't an assignment operator - you're expected to give each variable a unique name if you have a multi-stage process, so you only ever make statements of equality rather than assignment.
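
A rough Python sketch of that correspondence (my own, with made-up example values):

    from functools import reduce
    import operator

    # Set-builder notation {x^2 : x in {0,...,9}, x even} as a comprehension
    even_squares = {x**2 for x in range(10) if x % 2 == 0}

    # Capital sigma and capital pi are reductions over an index range:
    # sum_{i=1}^{5} i^2 and prod_{i=1}^{5} i
    sum_of_squares = sum(i**2 for i in range(1, 6))     # 55
    factorial_5 = reduce(operator.mul, range(1, 6), 1)  # 120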

Sure, it's an ambiguous language sometimes, but it sounds like your complaint comes down to standardising it more and using longer variable names.


> Sure, it's an ambiguous language sometimes, but it sounds like your complaint comes down to standardising it more and using longer variable names.

Has unstandardization and shorter variable names ever been heralded as the icon of understandability? I think that answers how I feel about that.

> When you write a function, you still need to give the arguments names. When you write an algorithm and want to share it with someone, you still need to give the variables names. That's all the Greek symbols are - and mathematical notation is an abstract, human-evaluatable programming language.

I get that. When I write a loop over a space of integers I sometimes use "i" or "j". When I iterate over a 2D array I usually use "x" and "y" ("z" for 3D). If I'm writing networking/file code I usually have an array of bytes of length N called "buff" or "buffer".

These are all common threads. But when I'm writing "buffer", "i", "j", "x", "y", or "z" they mean the same thing at every location in every piece of software to every software developer.

But, and here's the big but, I get a tingling feeling every time I use one of these. I always ask myself "is this the meaning I'm attempting to convey"? If there's a slight variation I'll just rename the variable to a more verbose version to make it easier to follow.

> Common ideas are abstracted into functions: sine, cosine, min, max, even weird things like Bessel functions and Legendre polynomials are all written as function(arguments) in maths.

I can't name an idea in computer science that isn't a function (in at least one major programming language). We can go so far as to say that the basic operators, right down to define, are functions in LISP. I can, on the other hand, name many ideas that aren't functions in "Math"-lang, and I'm sure you can as well.

> Python's list and set comprehensions etc actually come from a real mathematical notation for defining sets.

The difference is Python uses, for the most part, the ASCII character set and presents a simple-for-all notation.

> All those capital sigmas and capital pi's are reduction operators - sums and products.

Why do Pi and Sigma provide a better interface to these operations than the apply and reduce notation of LISP?

> There isn't an assignment operator - you're expected to give each variable a unique name if you have a multi-stage process, so you only ever make statements of equality rather than assignment.

Am I just supposed to guess the starting state of a variable? How do I define variables based on others? What type is my new variable (collection, graph, tree, value)? Why not just have a simple variable name for every non-common variable? (Pi for 3.14... is fine because it is a constant everywhere.)

Also, if there is a multi-stage process you should have multiple functions that define a subset of the solution, each with an appropriate name, each with appropriate "test-cases" (proofs) for that subset, each with documentation and type annotations to provide an understanding of how it works. None of them should require domain-specific ideas to understand after you've read the "tests", docs, types, and names.
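
To make that concrete, here's a rough sketch of the kind of thing I mean (the function names and the quadratic-formula example are just made up for illustration):

    def discriminant(a: float, b: float, c: float) -> float:
        """Discriminant of a*x^2 + b*x + c."""
        return b * b - 4 * a * c

    def real_roots(a: float, b: float, c: float) -> list[float]:
        """Real roots of a*x^2 + b*x + c = 0 (a != 0), via the quadratic formula."""
        d = discriminant(a, b, c)
        if d < 0:
            return []
        return [(-b + d ** 0.5) / (2 * a), (-b - d ** 0.5) / (2 * a)]

    # "test cases" documenting the expected behaviour
    assert real_roots(1, -3, 2) == [2.0, 1.0]
    assert real_roots(1, 0, 1) == []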


> Why do Pi and Sigma provide a better interface to these operations than the apply and reduce notation of LISP?

I can answer that. You have hit the nail on the head here, that pi and sigma are in fact much better when you're doing math.

Unlike most programming languages, mathematical notation is not limited to using characters in one font and size on a series of lines, one after the other. Math notation is 2D with whatever fonts you please. So we take advantage of that. When you see a big symbol like a sigma or pi, you know that it's a higher-order operator like 'reduce' in Lisp. There's a whole family of symbols that you can put there, not just sigma and pi, but integrals, tensor products, coproducts, unions, intersections, exterior products, etc.
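
For a rough illustration (my own LaTeX, not the parent's), these all share the same "big operator over an indexed family" shape:

    \sum_{i=1}^{n} a_i \qquad
    \prod_{i=1}^{n} a_i \qquad
    \bigcup_{i \in I} A_i \qquad
    \bigcap_{i \in I} A_i \qquad
    \int_{0}^{1} f(x)\,dx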

Or in short, that's just how mathematicians write certain types of higher-order math, by using a big font. They're all vaguely similar to "reduce" in the sense that they apply the inner expression to each element of a set and produce a single output. However, they don't translate to "reduce" in any concrete way. You can't rewrite summations in math as "reduce" in Lisp, because while they produce a single output, there's not one particular way in which that happens. For example, summations and integrals make existential claims, while unions and intersections do not.

I think that's the biggest difference between programming and math here—much math is nonconstructive, many of these operators are not total, and therefore it doesn't make sense to think of them as functions, and therefore it doesn't make sense to write them as functions—because functions are total.

To rephrase that, in math, sum() is not a function, so we don't write it using function notation. This to me is clearer—we avoid getting confused by using the same notation for two different things (functions and non-functions).


I have absolutely no idea what you're trying to say.

What do you mean with all that? Maybe I'm being obtuse but you're just saying random words and terms from programming. How on earth is that applicable to physics? I have absolutely no idea what you're trying to propose.


Well, not OP, but that was all very understandable to me, and resonated. I'll try to explain some specific points.

>Why do this rather than define an abstract, human-evaluatable, programming language?

The syntax of math is bad for a number of reasons. Making it some big 3D thing isn't going to make that syntax more reasonable.

Math is a language that could learn a lot from programming languages, even if you don't plan on running it on a computer and dealing with all that hassle.

>Making strange squiggles on a page doesn't help. Making strange Greek symbols on the page doesn't help.

>Ditch the alphabet soup (and yes, even the 5th century Greek alphabet soup)

Math is optimized for paper. This makes it a lot less accessible these days. All these custom glyphs are a damn weird idea. Naming your variables "a" wouldn't be accepted in any other language.

>Abstract common ideas into "functions".

Once you've abandoned the idea of single-letter variable names, it becomes easier to abstract common mathematical ideas into functions and build a great big library of those functions.

> Use logically derived variable names (even if they're 5-10 characters).

Naming your variables "a" wouldn't be accepted in any other language, and from the outside looks like a pretty silly idea.

>Phi shouldn't mean 30 things depending on the field you're in.

Choose more descriptive variable names.


The incorporation into other language-based fields, including mathematics, of broader programming principles and concepts that have evolved over the course of building software systems, and the standardization of these languages to lessen ambiguities.

Things like using variable names that make sense rather than the current single-letter + subscript limit. Using easily typable characters and combining those to create complex operations. Substituting language for obtuse symbols. Simplification of expression syntax.

I don't know best how to put it in your terms, nor what I'm missing, because these are things I take for granted being a programmer.


The plethora of Greek symbols and italicized fonts for fancy operations make interesting research papers mostly unreadable for me. It would be great if mathematicians started using descriptive names for each variable (including non-obvious functions/operations), and weren't limited to one letter per symbol (i.e. I'm describing programming).

Granted, there's too much momentum to change anything, especially since it would probably slow down the people doing the fancy math.


It's not just momentum. "Non-obvious" is relative. Every mathematical paper is built on top of thousands of pieces of foundational work, how do you pick which pieces of the foundation are obvious and which are non-obvious? And how do we even use descriptive names? Descriptive names were used in older works like Philosophiæ Naturalis Principia Mathematica which are practically unreadable (even when translated into your native language) compared to more modern ways of writing things like "F=ma", "F=mx''", or "F=m d^2x/dt^2", which are fantastic to understand. Or if I'm writing something like "x+0=x", is there really some descriptive name I can use for "x"?

Or in short, we tried that and terse notation tends to be both easier to understand and more concise.


He expressed something similar to what he had said in the aforementioned talk, that conventional mathematical notation is unhelpful, and that we ought to be able to express things more clearly as programs. I agreed that I find conventional mathematical notation unhelpful; when I try to read papers there are concepts I easily understand as code but I can't parse the symbolic math of. "There's too much operator overloading", I said, "and that makes it hard for me to understand in which way a symbol is being used, and papers never seem to clarify." Sussman replied, "And it's not only the operator overloading! What's that 'x' doing there! That's why we put 'let' in Scheme!"

https://dustycloud.org/blog/tag/sussman/

Looks like you are not the only great mind to think of this (although I do also think that a diagram with some text can lead to the clearest explanations).

There is a link in that post to a video where Sussman explains the notation, although there may be better ones as well.


> abstract, human-evaluatable, programming language?

yes, it's happening: https://ncatlab.org/nlab/show/Globular

> Use logically derived variable names

I agree. It just becomes a pain in the ass to write this stuff. The Greek is math-asm for the hand.

> completely standard way to define all of your algorithms and logical assertions?

people are working on it, for example: https://github.com/UniMath/UniMath "This coq library aims to formalize a substantial body of mathematics using the univalent point of view."
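
As a taste of what that looks like in practice, here is a minimal Lean 4 sketch of my own (not from UniMath, which is written in Coq):

    -- `n + 0 = n` holds by definition of Nat.add, so reflexivity closes it.
    example (n : Nat) : n + 0 = n := rfl

    -- `0 + n = n` is not definitional; it needs an actual proof
    -- (here via a library lemma that is itself proved by induction on n).
    example (n : Nat) : 0 + n = n := Nat.zero_add n

Even at this toy scale you can see why it gets verbose: facts a mathematician would never bother to state have to be discharged explicitly.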

One thing to note, however, is that the mathematics in the OP is inherently two (and higher) dimensional, and so it makes much more sense to adopt a language that is two (or higher) dimensional. Squashing it down into one dimension (like with a usual language) creates a big mess.


This reminds me of the effort by Sussman to teach physics by programming: Functional Differential Geometry [1] was discussed on Hacker News before [2].

[1] https://mitpress.mit.edu/books/functional-differential-geome... [2] https://news.ycombinator.com/item?id=7884551


You may be interested in Automated Theorem Proving:

I quote the author Geoff Sutcliffe here:

"Automated Theorem Proving (ATP) deals with the development of computer programs that show that some statement (the conjecture) is a logical consequence of a set of statements (the axioms and hypotheses). ATP systems are used in a wide variety of domains. For examples, a mathematician might prove the conjecture that groups of order two are commutative, from the axioms of group theory; a management consultant might formulate axioms that describe how organizations grow and interact, and from those axioms prove that organizational death rates decrease with age; a hardware developer might validate the design of a circuit by proving a conjecture that describes a circuit's performance, given axioms that describe the circuit itself; or a frustrated teenager might formulate the jumbled faces of a Rubik's cube as a conjecture and prove, from axioms that describe legal changes to the cube's configuration, that the cube can be rearranged to the solution state. All of these are tasks that can be performed by an ATP system, given an appropriate formulation of the problem as axioms, hypotheses, and a conjecture.

The language in which the conjecture, hypotheses, and axioms (generically known as formulae) are written is a logic, often classical 1st order logic, but possibly a non-classical logic and possibly a higher order logic. These languages allow a precise formal statement of the necessary information, which can then be manipulated by an ATP system. This formality is the underlying strength of ATP: there is no ambiguity in the statement of the problem, as is often the case when using a natural language such as English. Users have to describe the problem at hand precisely and accurately, and this process in itself can lead to a clearer understanding of the problem domain. This in turn allows the user to formulate their problem appropriately for submission to an ATP system.

The proofs produced by ATP systems describe how and why the conjecture follows from the axioms and hypotheses, in a manner that can be understood and agreed upon by everyone, even other computer programs. The proof output may not only be a convincing argument that the conjecture is a logical consequence of the axioms and hypotheses, it often also describes a process that may be implemented to solve some problem. For example, in the Rubik's cube example mentioned above, the proof would describe the sequence of moves that need to be made in order to solve the puzzle.

ATP systems are enormously powerful computer programs, capable of solving immensely difficult problems. Because of this extreme capability, their application and operation sometimes needs to be guided by an expert in the domain of application, in order to solve problems in a reasonable amount of time. Thus ATP systems, despite the name, are often used by domain experts in an interactive way. The interaction may be at a very detailed level, where the user guides the inferences made by the system, or at a much higher level where the user determines intermediate lemmas to be proved on the way to the proof of a conjecture. There is often a synergetic relationship between ATP system users and the systems themselves:

- The system needs a precise description of the problem written in some logical form,
- the user is forced to think carefully about the problem in order to produce an appropriate formulation and hence acquires a deeper understanding of the problem,
- the system attempts to solve the problem,
- if successful the proof is a useful output, if unsuccessful the user can provide guidance, or try to prove some intermediate result, or examine the formulae to ensure that the problem is correctly described,
- and so the process iterates.

ATP is thus a technology very suited to situations where a clear thinking domain expert can interact with a powerful tool, to solve interesting and deep problems. Potential ATP users need not be concerned that they need to write an ATP system themselves; there are many ATP systems readily available for use. At the 1st order level some well known and successful systems are Otter, E, SPASS, Vampire, and Waldmeister. Higher order systems include ACL2, Coq, HOL, and Nqthm. For examples of problems written in classical 1st order logic, the TPTP problem library is a useful resource, which also has a simple interface for users to try out many 1st order ATP systems.

Fields where ATP has been successfully used include logic, mathematics, computer science, engineering, and social science; some outstanding successes are described below. There are potentially many more fields where ATP could be used, including biological sciences, medicine, commerce, etc. The technology is waiting for users from such fields."

Link: http://www.cs.miami.edu/~tptp/OverviewOfATP.html
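
For concreteness, one common way to write down the "order two" example above is roughly the following (my own sketch of the axioms and conjecture, not Sutcliffe's exact TPTP encoding):

    % Axioms: group theory
    \forall x, y, z.\ (x \cdot y) \cdot z = x \cdot (y \cdot z)
    \forall x.\ e \cdot x = x
    \forall x.\ x^{-1} \cdot x = e
    % Hypothesis: every element squares to the identity
    \forall x.\ x \cdot x = e
    % Conjecture: the group is commutative
    \forall a, b.\ a \cdot b = b \cdot a

An ATP system is then asked to show that the conjecture is a logical consequence of the axioms and hypothesis.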


Isn't it ironic that the article had not a single picture of said picture models? (You can't count the rough chalkboard sketches as a 3D model, imo.) The actual linked paper does, of course.


I was also impressed by the fact that the news title talks about 3D Lego models but has no images whatsoever.


Quons: A 3D Language for Quantum Information Liu, Wozniakowski, & Jaffe https://www.arxiv.org/abs/1612.02630

Holographic Software for Quantum Networks Jaffe, Liu, & Wozniakowski https://arxiv.org/abs/1605.00127


Here, you'll find discussion on the development of the picture-language and some application, i.e., topological design of protocols (by Jaffe, Liu, and Wozniakowski): https://arxiv.org/abs/1611.06447


> It turns out one picture is worth 1,000 symbols

Says article with ~750 words and one useless picture


This sounds incredibly cool, but how can I see an example of the images? A simple search for "Quon examples" yielded nothing.



If you want an intermediary between the linked Lego story and the quon paper in PNAS, then check out the commentary (in PNAS): https://dl.dropboxusercontent.com/u/78485948/PNAS-2017-Biamo...


In deep learning, layer manipulation in Keras is often compared to Legos, as you can mix-and-match connections between layers and stack them pretty easily with minimal code adjustments. It makes iteration/experimentation extremely easy.
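
A minimal sketch of what that looks like (layer sizes and choices here are arbitrary):

    from keras.models import Sequential
    from keras.layers import Dense, Dropout

    # Stack layers like bricks; swapping or inserting one is a one-line change.
    model = Sequential([
        Dense(64, activation="relu", input_shape=(32,)),
        Dropout(0.2),
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")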


I generally keep memes off of HN*, but this one always cracks me up: http://i.imgur.com/GYdvtAW.jpg

[so I'll delete it if it gets downvoted]


I would have expected "making math more legos like" (US English) or "making maths more lego like" (British English). Instead we have a hybrid version.


It's US English; they're referring to Lego the product as a whole, rather than individual Lego pieces (the meaning ends up basically the same, but the distinction does change the grammar).


I see. My impression is that in US film and television "legos" is used almost exclusively. But it's not like I've been taking notes or anything, and I could well be wrong.


Lego is not a product. LEGO is a trademark for a brick system.


All these wonderful graphic languages exist. Now it's time to put them into practice!


Cool.


University magazines are a terrible place to learn about new research because they are effectively trying to sell the research at their university as great, whether it really is or not. Looking at the arXiv paper they linked, I was underwhelmed compared to the title. It is not a new Lego-like language for "mathematics"; it's some definitions in a very particular subfield of mathematics. An accomplishment within that subfield, to be sure, but not anything relevant to the wider math community, and certainly not to Hacker News.


My personal experience with Harvard is they are the best at over-exaggeration. Also very good at branding "been done before, but not at Harvard" research as brand new and revolutionary. Others do it too, but HU masters this.


[flagged]


> you have no idea what you're talking about, you aren't making any sense, and you're clueless

We ban accounts that comment like this, and we've warned you about it before. If you continue to post uncivil comments, we will ban you. Please (re-)read the site rules and follow them:

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html

We detached this subthread from https://news.ycombinator.com/item?id=13789114 and marked it off-topic.


I meant this in the most literal of ways, that the gp had no idea about the topic they were talking about, not as an insult. That being said, I apologise. It wasn't my intention to break the guidelines.


Actually, he/she makes quite a bit of sense. And is certainly not "clueless". (Aside: using 'with all due respect' does not automatically excuse exaggerated and incorrect statements about an individual; that can be rude.)

1) A mathematical "proof" can have bugs in it, just like a function. The more esoteric a proof, the fewer the people who can verify it, and the greater the likelihood of a bug. This is not a joke and happens a LOT; see this: https://en.wikipedia.org/wiki/Four_color_theorem

2) And he/she very correctly points out the horrid use of mathematical notation in papers. Absolutely abysmal. A great paper is literally wasted because of this.


> With all due respect, you have no idea what you're talking about, you aren't making any sense, and you're clueless about what mathematics is about when you call for ideas to be abstracted into "functions", and complain about naming conventions and so on. In your eagerness to apply what you've learned about software development to a completely different field you are grossly misunderstanding what it is all about.

You're right, I don't know much about the fields of mathematics and physics.

> posts reveal a fundamental misunderstanding of what mathematics (and physics) is about. Tip: it's not about algorithms. So don't presume you can apply techniques from software development to mathematics, just as it doesn't make sense to apply tools from physics to computer programming.

Actually, quite the opposite. I do expect programming principles and tools to be able to aid in expressing ideas that can be thought about as a language. This is plainly visible with problems like unstandardized common variables.

The only physics-heavy book I've read some of recently is Kenneth Davies' Ionospheric Radio Propagation, because it relates to my work. In it, there are a few pages dedicated to just defining what all of these funny Greek symbols mean, because they are used in pretty much every math-heavy discipline but each field takes them to mean its own thing.

Even with this quick reference, there are large portions of the book that remain a mystery, because reading them simply requires knowing this single field of physics' common variable names and meanings. The chapter on VLF propagation (just after the diagram of these waves being propagated by the Earth's field lines) refers to X sub something (it's been a few days); it relates that approximately to the plasma frequency in a region of space, but it means something different.

I have no idea what that variable is without finding a plasma physicist and asking them directly what that specific niche variable name means for this specific field under this specific topic relating to this one type of equation.

Even with this simple example, I can show that it would be better to have a descriptive variable name rather than "X sub something". This is just one example from this week where this trend has bitten me in the ass. If it's so easy to see in this single situation that the programming idea of descriptive variable names would help, then why wouldn't other programming concepts fit in math?

Can you prove why this is the case?


So your qualm is about notation? That it is confusing and too terse? It is so for good reason: you couldn't write complex expressions if you used single-word identifiers instead of letters.

For example, try writing this relatively simple expression, Noether's theorem, using one-word identifiers: http://quicklatex.com/cache3/8a/ql_eb3c64b518d0f2b54ceada1f8... . It gets unwieldy. Imagine manipulating more complicated expressions and writing words instead of a simple symbol. Any effort it takes to memorize a convention is recouped several times over in ease of use.
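
To see the point with something even smaller than the Noether expression (my own toy contrast, in LaTeX):

    x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}

    \mathit{root} = \frac{-\mathit{linear} \pm \sqrt{\mathit{linear}^{2}
        - 4 \cdot \mathit{quadratic} \cdot \mathit{constant}}}{2 \cdot \mathit{quadratic}}

The quadratic formula is about as small as formulas get, and the worded version is already unwieldy to manipulate.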

At any rate, I fail to see how else concepts from software development can apply to mathematics and physics, besides the superficial issue of identifiers.


Since the book you are complaining about is available online (https://archive.org/details/IonosphericRadioPropagation), I had a look to see for myself.

Interestingly, the same paragraph that uses X without explanation also uses ν, which was just as mysterious to me. However, from your comment I gather that it denotes the plasma frequency. So it seems that the second variable was introduced correctly for you to understand it, while the first one was not. To use a programming metaphor, the documentation was lacking.

I found an example of notation introduced right (in my opinion) on page 405:

In order to take into account the double refraction in the ionosphere, it is convenient to use the two reflection coefficients ∥R∥ and ⟂R⟂ together with two conversion coefficients ∥R⟂ and ⟂R∥. The first subscript denotes whether the electric field of the incident wave is parallel (∥) to or perpendicular (⟂) to the plane of incidence and the second subscript refers to the same characteristic in the reflected wave.

The variables are mnemonic (_R_eflection), vary systematically and use pictographic symbols standardized in another field, and most importantly all of this is explained.

Now you might object that all of this explanation wouldn't even be necessary if a more descriptive variable name had been used. But in programming, we frequently have to put a comment next to our variable declarations anyway, to give more context than a single word can provide.

Using single-word variable names might still be a step up for readability when writing equations. But consider that when using the equation, you will end up writing those variables over and over again, with just slight modifications in parts of the formula. And when using some equations, e.g. the matrix inversion lemma, you end up blowing up your expressions to truly gigantic size. In programming, we would just factor the expressions into more manageable chunks. But when you need the expanded form of the expression to apply some simplification, you can't really do that, short of simply not writing down the step. Mathematical notation is really better suited for this kind of frequent manipulation and rewriting.

Nonetheless, I think some of the lessons learned in programming concerning the use of dense formal languages might transfer to mathematics. One example is explicit declaration before use, which would probably have helped your problem with X just as well as a descriptive variable name. Another is the IDE experience, where hovering over a symbol gives you the contextual information you need to understand it.


I'm sorry, but I failed to mention that this book has two revisions, one of which represents a HUGE change to the book. It was entirely rewritten, every diagram redone, all information re-sourced.

The paragraph I'm speaking of does not appear in the original book. Our understanding of VLF propagation hadn't changed much, but the way it is explained, and our understanding of some extremely important phenomena, are not reflected in the first edition. (One of the most noticeable cases of this is that the only mention of whistlers in the first book is in relation to exospheric events, which is definitely not the case.)

I'll recomment on this when I get into work and take a photo of the specific paragraph. I'll show it to you and maybe you can better describe what it is. If you've got an email in your description I'll shoot one off when I add it.

> The variables are mnemonic (_R_eflection), vary systematically and use pictographic symbols standardized in another field, and most importantly all of this is explained.

What's the mnemonic of "X"?

> Using single-word variable names might still be a step up for readability when writing equations. But consider that when using the equation, you will end up writing those variables over and over again, with just slight modifications in parts of the formula

And? I'm fine with that. If anything, as programmers we can write IDEs for math majors. Maybe we might actually kill off LaTeX and get something that makes some sense to fill its place.

> But when you need the expanded form of the expression to apply some simplification, you can't really do that, short of simply not writing down the step. Mathematical notation is really better suited for this kind of frequent manipulation and rewriting.

My MathIDE could, with one click, transform your reduced equation into an expanded form. It would also definitely be possible to turn an expanded form back into a reduced form.

> Nonetheless, I think some of the lessons learned in programming concerning the use of dense formal languages might transfer to mathematics. One example is explicit declaration before use, which would probably have helped your problem with X just as well as a descriptive variable name. Another is the IDE experience, where hovering over a symbol gives you the contextual information you need to understand it.

Yep I totally agree.


OK, what are F sub N and F sub L in the last paragraph of this page?

http://imgur.com/a/bRFFE


Also sent you an email, putting my response here for posterity.

First of all, I'm sorry for responding so late; HN comments are somewhat of a fire-and-forget for me.

Second, I'm not much of a plasma physicist (or any kind of physicist, really). In fact, I was mostly interested in your example because I wanted to see how well I could understand it without the relevant background. Turns out, not very well. But I think I figured out what the fs are, so maybe that will help you.

Looking at the paragraph in your photo, I noticed that it mentions three kinds of frequencies: of the whistlers, of the plasma and the electron gyrofrequency. Since f is frequently used to denote frequencies in physics, I guess that f, f_N and f_L are these three. (The units of Hertz are another hint.)

Based on the description, the whistler frequency should be the smallest one. In the example, that is f. It remains to determine f_N and f_L. Unfortunately, the names are definitely not mnemonic, so I needed some other source. Turns out Wikipedia has an article on the Appleton-Hartree equation that defines all the variables it uses. (https://en.wikipedia.org/wiki/Appleton%E2%80%93Hartree_equat...)

There, f_0 and f_H are used for plasma frequency and gyro frequency. Also not mnemonic, but at least explained. I guess the relevant equation is in the section on quasi-longitudinal propagation. I simplified it even further by using the assumption that the angle theta = 0, which means that the sine is 0 and the cosine is 1. Then the equation becomes

    n^2 = 1 - X/(1 ± Y)
and since the book assumes that X and Y are much larger than unity, this can be approximated by

    n^2 = 1 ± X/Y = 1 ± (f_0^2/f^2)/(f_H/f) = 1 ± f_0^2/(f*f_H)
Further assuming that the ratio is so large that the 1 can be dropped, this gives an expression for the refractive index

    n = f_0/sqrt(f*f_H)
Matching terms with the equation in the book, f_N appears to denote the plasma frequency (f_0 on Wikipedia) and f_L is the gyro frequency (f_H on Wikipedia).

Mystery solved!
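
As a quick sanity check on that last approximation, a sympy sketch (my own, using the f_N and f_L names from the book):

    import sympy as sp

    f, f_N, f_L = sp.symbols("f f_N f_L", positive=True)
    X = f_N**2 / f**2   # (plasma frequency / wave frequency)^2
    Y = f_L / f         # gyro frequency / wave frequency

    # n^2 ~ X/Y once the leading 1 is dropped; the square root matches the book:
    n_approx = sp.sqrt(X / Y)
    print(sp.simplify(n_approx))  # f_N/(sqrt(f)*sqrt(f_L)), i.e. f_N/sqrt(f*f_L)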

Now I completely agree with you that this kind of detective work shouldn't have been necessary and the full derivation should have been in an appendix or in a cited reference.


A wise man once told me visual images speak louder than words. Yes, this sounds incredibly awesome.



