
Why do this rather than define an abstract, human-evaluatable programming language? Give it some S-expr syntax and make more things implicit and you're golden.

Making strange squiggles on a page doesn't help. Making strange Greek symbols on the page doesn't help. Writing functions, "unit tests", building on abstractions, and creating "libraries" do help. We all know this because it works well in CS. Why would this not work well in physics?

Ditch the alphabet soup (and yes, even the 5th century Greek alphabet soup). Abstract common ideas into "functions". Write human-provable "test-cases". Add documentation. Use logically derived variable names (even if they're 5-10 characters). Formalize the definitions and syntax of this language and its abilities. You'd be golden.

Phi shouldn't mean 30 things depending on the field you're in.

Am I wrong? Is this not possible? Is there a reason as to why math and logic can't be expressed as an abstract programming language? Is there a reason why it wouldn't be better to have a completely standard way to define all of your algorithms and logical assertions?

I could be wrong but no one has provided information as to why what we do is better. Nor does this look much better (what kmill has linked).




> Am I wrong? Is this not possible?

Yes, you're wrong.

I mean, I agree with you. This is the route we should commit to. Building up mathematical theories in a programming language.

But I think you've reached the right conclusion for horribly confused reasons, and that you severely overestimate the state of the art in CS. This overestimation results in you not realizing why something like this could be a valuable tool within the world you describe.

Perhaps I'm wrong, but here's what gives me this impression:

> Write human-provable "test-cases"

Since the article is discussing mathematics, I suppose here by proof you actually really honestly mean proof.

So proof by "I worked through 6 examples and they all turned out OK so far"?

That's not typically what a mathematician means when they say they proved something. Proofs can get horribly complicated in ways that test cases generally don't.

> Writing functions, "unit tests", building on abstractions, and creating "libraries" do help. We all know this because it works well in CS.

Yes, all of these things would be great.

But it's REALLY not that simple.

Formal verification of software systems remains a mostly academic pursuit. There's some promising early work, but almost none of "us" formally verify our software.

It does NOT work well in CS. Not at the moment. And most things we do in CS are conceptually simpler than a lot of mathematics. So exploring alternative interfaces makes a lot of sense.

> Is this not possible? Is there a reason as to why math and logic can't be expressed as an abstract programming language?

They can be. It's horrendously verbose, and it's difficult to do novel work in such a formal setting.

It does not work well in the same sense that it does not work well in CS -- you can do it, if you're willing to spend 10x the time/money/effort for the "good enough" version (in math, paper proofs; in CS, shippable but unverified code).


> I think the problem is that you don't understand what a real result looks like in math

Unfortunately I've been plagued with the task of reading some math papers to understand some things in the past. I still wake up in a cold sweat sometimes.

> So proof by "I worked through 6 examples and they all turned out OK so far"?

> That's not typically what a mathematician means when they say they proved something...

I'm well aware. That's why I included the quote marks. I mean a logical proof based on how this "function's" suppositions relate to the rest of the "functions" it is integrated with, which provide context and mathematical relations to the other parts of the conjecture you're working on.

> Formal verification of software systems remains a mostly academic pursuit. There's some promising early work, but almost none of "us" formally verify our software.

> So no, you're wrong. It does NOT work well in CS. Not at the moment.

I mean this is just blatantly wrong and ignores the point of what I'm saying. I'm implying that 1) if you take a pinnacle example of computer science, you can take a freshman-level CS student, give them a tour of the language, let them loose on a problem, and they'll understand what's going on. Some of the world's most complex simulation systems have children in high school writing games in them. Things like Unity and Unreal Engine represent thousands and thousands of man-hours of work, along with entire libraries of research, industry-funded experiments, and years of expertise. I'm saying that the common practices of programming can yield systems that are simple for anyone to maintain and work on. I can't say the same for mathematics. While I understand the concepts of derivatives, integrals, and calculus, if tasked I'd most likely not be able to provide a formula for the area of a new shape, despite understanding the core material far better than most programmers understand the new project they've been flung into.

2) The concepts from computer science and programming can be adapted to suit math's needs, and these adaptations can be fluid: novel augmentations of the process that yield positive results can be further integrated into mathematics. Programming and computer science are quickly evolving fields and have a unique position in that many people engage in them without formal training from a university, because in most cases it's simply unneeded. You can pick up one book and then rely on the common goal of every computer scientist and programmer that came before you: abstraction of the concepts to yield a simple solution to arbitrarily complex problems. Because of this you can jump in anywhere and pick up and run with your work.

I'm not saying these ideas are directly applicable with no modification. What I am saying is "these are the ideas that make my field possible" and suggesting that standardizing practices in the Math-heavy fields would not be a bad thing, especially if these same standardized practices have been used to great effect in our field to produce a set of very positive outcomes.


TL;DR: The idea you express -- a programming-like approach to mathematics -- has been around since the dawn of Computer Science. I could even make the case that this idea pre-dates the computer by a couple hundred years.

There are literally hundreds of researchers around the world working toward this vision. But this remains a niche approach, and there are some very real hurdles to widespread use. Therefore, exploring different approaches is certainly justifiable.

--

> Unfortunately I've been plagued with the task of reading some math papers to understand some things in the past. I still wake up in a cold sweat sometimes.

I'm with you there :-)

Although it's perhaps telling that many CS papers are just as bad, if not worse, even when they come with code! Executable ideas aren't necessarily easier to understand...

>> formal verification is a mostly academic pursuit

> I mean this is just blatantly wrong

It's really not though!

Things have really picked up over the past 10 years, but a vast majority of the software we use -- and a vast majority of the software being created -- does not make significant use of formal proof.

And certainly not the sort of functional correctness verification efforts that would be analogous to formalized mathematics.

> Some of the world's most complex simulation systems have children in high school writing games in them

This is like saying that multivariate calculus is child's play because children can manipulate things in R3. There's a kernel of truth, but it's mostly wildly incorrect.

Most Mathematicians generate proofs, not pretty simulations.

Besides, and more to the point, those kids can do that precisely because those engines/frameworks/languages abstract away a lot of very difficult mathematics that they couldn't hope to understand! Which is exactly what the work described in the article could achieve, in a different domain and for a user at a different level of maturity.

>I'm saying that the common practices of programming can yield systems that are simple for anyone to maintain and work on.

And I'm saying that the history of formal verification in CS and proof assistants in mathematics provide direct, empirical evidence that this just isn't true.

Doing formal mathematics in a proof assistant (aka programming language) is hard and time-consuming. Maybe one day soon it will be practical for most mathematicians, but that day is not today.
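To make the trade-off concrete, here's a minimal sketch in Lean 4 (a modern proof assistant; Coq would look broadly similar), with the natural numbers and addition hand-rolled rather than taken from a library. Even the schoolbook fact that zero is a left identity for addition has to be proved explicitly, by induction, before anything can be built on top of it:

    -- Hedged sketch: a hand-rolled natural number type and addition.
    inductive MyNat where
      | zero : MyNat
      | succ : MyNat → MyNat

    def add : MyNat → MyNat → MyNat
      | m, MyNat.zero   => m
      | m, MyNat.succ n => MyNat.succ (add m n)

    -- `add n zero = n` holds by definition, but the symmetric fact needs
    -- an induction carried out step by step.
    theorem zero_add (n : MyNat) : add MyNat.zero n = n := by
      induction n with
      | zero => rfl
      | succ n ih =>
        show MyNat.succ (add MyNat.zero n) = MyNat.succ n
        rw [ih]

Multiply that by every definition and lemma a real paper leans on and you get a sense of where the time goes.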

>I'm not saying these ideas are directly applicable with no modification

People have been trying your idea since literally the dawn of Computer Science. Look up "proof assistant" in Wikipedia.

"No modification" is the under-statement of the century. Figuring out those modifications is an entire not-so-cottage industry in CS research. There are entire families of conferences dedicated to the topic.

So again, I agree with you! But you're vastly underestimating the difficulty of what you suggest and, as a result, disregarding interesting and possibly complementary approaches.


> Although it's perhaps telling that many CS papers are just as bad, if not worse, even when they come with code! Executable ideas aren't necessarily easier to understand...

I'm not too fond of academic-anything. Most papers are bad, period. Most journals are bad, period. That's when taken with respect to this "good/bad" dichotomy. There are good and bad things in most of them, though, that you and I can recognize.

Sadly, Computer Science in academia is little more than math. In industry it means something completely different. I think Linus is more of a computer scientist than someone who comes up with a new cryptography algorithm, or homomorphic encryption method, or whatever is "hip" these days.

> Things have really picked up over the past 10 years, but a vast majority of the software we use -- and a vast majority of the software being created -- does not make significant use of formal proof.

> And certainly not the sort of functional correctness verification efforts that would be analogous to formalized mathematics.

I'm not talking about formal verification when I talk about computer science/programming methods for representing ideas.

Also, formal verification isn't an academic endeavor. You've got a CPU in your system whose VHDL or Verilog has been formally verified.

You've got Boeing planes flying in the air whose software is almost completely verified.

But I don't think formal verification is the most important thing! I think the way we represent our ideas in a clear and understandable fashion is important. My complaints are about language, not proof or semantics.

Also, in math your proof is your program, so this is an entirely different issue.

> This is like saying that multivariate calculus is child's play because children can manipulate things in R3. There's a kernel of truth, but it's mostly wildly incorrect.

> Most Mathematicians generate proofs, not pretty simulations.

I'm not saying a highschooler is Donald Knuth. I'm saying they can benefit from the abstractions they stand on to create further experimentation for themselves. One day, I do think, they will become Donald Knuths with better understandings of these building blocks they have access to.

Do you think someone who has never touched a game engine will have an easier time understanding what a game engine does than someone who has worked with them since their early teens? I fully expect high schoolers who like math to be able to experiment with the implications of math just as a high schooler who likes computers can currently experiment easily with the implications of computer programs.

> And I'm saying that the history of formal verification in CS and proof assistants in mathematics provide direct, empirical evidence that this just isn't true.

I'm not talking about formal verification. I'm talking about the ways we express our ideas. Computer Science != formal verification.

I am talking about how programmers express their ideas and the steps of solving a problem.

> "No modification" is the under-statement of the century.

I think your "formal verification is largely an academic endeavor" takes the cake on that one.


I'm not sure what to make of these comments here. What Linus does is a mix of software engineering, project management, people management, and product management. Computer science might be a distant fifth, as is the case with most people in the industry. There's no sense in poking fun at homomorphic encryption for being "hip". It's hip because it's a result that computer scientists have been trying to discover for many years, one with tons of potential applications.

I also have to agree that formal verification is largely an academic endeavor. Static analysis is taking off these days but that's not formal verification in any sense. There are a few projects here and there that are formally verified, and certainly there are tools for formal verification, but I've met precious few people in industry who even know how to use proof assistants. This includes my experience working with aerospace clients. Formally verified components, either hardware or software, tend to be small. The first formally verified kernel, for example, is the seL4 microkernel and verification took from 2006 to 2009. Suffice it to say there are few formally verified kernels floating around.


TL;DR: If you want to do mathematics in a programming language as you suggest, then you need a programming language that can express proof. Mathematics without proofs isn't mathematics. The ergonomics issues present in all of the existing languages for describing mathematical proofs are one way to motivate the work described in the article.

> I'm not too fond of academic-anything.

There's a difference between recognizing your personal tastes and being dismissive of entirely reasonable enterprises.

> Most papers are bad, period. Most jounals are bad, period.

Most software is bad, period. Most documentation of software is bad, period. Especially when written by 20-somethings straight out of university, which describes the vast majority of paper writers in academia.

...what's your point?

> I think Linus is more of a computer scientist than someone who comes up with a new cryptography algorithm

What benefit is there in re-defining extremely well-established terms? It doesn't actually serve to convince anyone of anything, and confuses conversations.

> I'm not talking about formal verification when I talk about computer science/programming methods for representing ideas.

So then you dismiss mathematics as a discipline? Because writing some code and some test cases is not mathematics. Modern Mathematics means proofs. Period.

Proof assistants are languages we've invented for describing proofs. They are literally exactly what you've asked for in your original post.

And they're pretty difficult to use. So exploring different approaches for describing mathematical structures and proofs makes a lot of sense.

> Also, formal verification isn't an academic endeavor. You've got a CPU in your system whose VHDL or Verilog has been formally verified.

Yes, I acknowledged that we've made tremendous progress!

But formal verification is still extremely resource intensive and extremely rare in software.

> You've got Boeing planes flying in the air whose software is almost completely verified.

Where did you even get this idea? It isn't true.

Every line of code might've been run through Coverity or whatever. Some static analysis is very different from verification of functional correctness properties.

> Also, in math your proof is your program, so this is an entirely different issue.

Yes, this is exactly my point! To do math like programming, you need a language for describing proofs. All of the languages that exist for doing this are difficult to use. Maybe visual languages could help out here.

> I'm saying they can benefit from the abstractions they stand on to create further experimentation for themselves.

You're trying to argue that the success of modern programming practices indicates that we can do the same in math. But 40 years of proof assistants remaining niche demonstrate this simply isn't true. It's a good idea, but it's not easy to get right.

> I'm not talking about formal verification. I'm talking about the ways we express our ideas.

In mathematics, proofs are the critical piece of output. Expressing proofs is step zero toward "expressing our ideas" in mathematics.

If you want to "do mathematics" in a language, as you suggest here, your language needs to be able to express proofs.

> Computer Science != formal verification.

And math != hacking out code.

> I think your "formal verification is largely an academic endeavor" takes the cake on that one.

Okay. When was the last time you wrote production code that had a functional correctness proof? What percentage of software written yesterday would you guess has, or one day will have, a functional correctness proof?

Or, to borrow your point about children, when was the last time a high school student wrote a proof in a proof assistant (i.e. did math in a programming language)?


When you write a function, you still need to give the arguments names. When you write an algorithm and want to share it with someone, you still need to give the variables names. That's all the Greek symbols are - and mathematical notation is an abstract, human-evaluatable programming language.

Common ideas are abstracted into functions, like sine, cosine - min, max, even weird things like Bessel functions, Legendre polynomials; these are all written as function(arguments) in maths.

Maths has indexing, a bunch of operators, and what the operators do often depends on what type of objects they're applied to.

Python's list and set comprehensions etc actually come from a real mathematical notation for defining sets.
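For instance (a minimal sketch; the set S here is made up), the set-builder expression {x² : x ∈ S, x even} is literally a comprehension:

    # Set-builder notation {x^2 : x in S, x even} as a Python set comprehension.
    S = {1, 2, 3, 4, 5, 6}
    even_squares = {x * x for x in S if x % 2 == 0}
    print(even_squares)  # {4, 16, 36}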

All those capital sigmas and capital pi's are reduction operators - sums and products. There isn't an assignment operator - you're expected to give each variable a unique name if you have a multi-stage process, so you only ever make statements of equality rather than assignment.
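To put that in code (a rough sketch using Python's functools.reduce rather than LISP), a finite Σ and Π are just folds:

    from functools import reduce

    xs = [1, 2, 3, 4]
    total   = reduce(lambda acc, x: acc + x, xs, 0)  # finite Σ over xs = 10
    product = reduce(lambda acc, x: acc * x, xs, 1)  # finite Π over xs = 24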

Sure, it's an ambiguous language sometimes, but it sounds like your complaint comes down to standardising it more and using longer variable names.


> Sure, it's an ambiguous language sometimes, but it sounds like your complaint comes down to standardising it more and using longer variable names.

Have unstandardization and shorter variable names ever been heralded as the icon of understandability? I think that answers how I feel about that.

> When you write a function, you still need to give the arguments names. When you write an algorithm and want to share it with someone, you still need to give the variables names. That's all the Greek symbols are - and mathematical notation is an abstract, human-evaluatable programming language.

I get that. When I write a loop over a space of integers I sometimes use "i" or "j". When I iterate over a 2d array I usually use "x" and "y" ("z" for 3d). If I'm writing networking/file code I usually have an array of N length of bytes called "buff" or "buffer".

These are all common threads. But when I'm writing "buffer", "i", "j", "x", "y", or "z" they mean the same thing at every location in every piece of software to every software developer.

But, and here's the big but, I get a tingling feeling every time I use one of these. I always ask myself "is this the meaning I'm attempting to convey"? If there's a slight variation I'll just rename the variable to a more verbose version to make it easier to follow.

> Common ideas are abstracted into functions, like sine, cosine - min, max, even weird things like Bessel functions, Legendre polynomials; these are all written as function(arguments) in maths.

I can't name an idea in computer science that isn't a function (in at least 1 major programming language). We can go so far as to say that the basic operators, right down to define, are functions in LISP. I can on the other hand name many ideas that aren't functions in "Math"-lang and I'm sure you can as well.

> Python's list and set comprehensions etc actually come from a real mathematical notation for defining sets.

The difference is Python uses, for the most part, the ASCII character set and presents a simple-for-all notation.

> All those capital sigmas and capital pi's are reduction operators - sums and products.

Why do Pi and Sigma provide a better interface to these operations than the apply and reduction notation of LISP?

> There isn't an assignment operator - you're expected to give each variable a unique name if you have a multi-stage process, so you only ever make statements of equality rather than assignment.

Am I just supposed to guess the starting state of a variable? How do I define variables based on others? What type is my new variable (collection, graph, tree, value)? Why not just have a simple variable name for every non-common variable (Pi for Pi or 3.14... is fine because it is a constant everywhere)?

Also, if there is a multi-stage process you should have multiple functions that define a subset of the solution, each with an appropriate name, each with appropriate "test-cases" (proofs) for that subset, each with documentation and type annotations to provide an understanding as to how it works. None of them should require domain-specific ideas to understand after you've read the "tests", docs, types, and names.
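Something like this minimal Python sketch (the function and values are made up) is the style I mean: a small named step with types, a docstring, and a "test-case" sitting right next to it:

    def triangle_area(base: float, height: float) -> float:
        """Area of a triangle with the given base and height."""
        return base * height / 2

    # A "test-case" for this one step of the larger solution.
    assert triangle_area(3.0, 4.0) == 6.0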


> Why do Pi and Sigma provide a better interface to these operations than the apply and reduction notation of LISP?

I can answer that. You have hit the nail on the head here, that pi and sigma are in fact much better when you're doing math.

Unlike most programming languages, mathematical notation is not limited to using characters in one font and size on a series of lines, one after the other. Math notation is 2D with whatever fonts you please. So we take advantage of that. When you see a big symbol like a sigma or pi, you know that it's a higher-order operator like 'reduce' in Lisp. There's a whole family of symbols that you can put there, not just sigma and pi, but integrals, tensor products, coproducts, unions, intersections, exterior products, etc.

Or in short, that's just how mathematicians write certain types of higher-order math, by using a big font. They're all vaguely similar to "reduce" in the sense that they apply the inner expression to each element of a set and produce a single output. However, they don't translate to "reduce" in any concrete way. You can't rewrite summations in math as "reduce" in Lisp, because while they produce a single output, there's not one particular way in which that happens. For example, summations and integrals make existential claims, while unions and intersections do not.

I think that's the biggest difference between programming and math here—much math is nonconstructive, many of these operators are not total, and therefore it doesn't make sense to think of them as functions, and therefore it doesn't make sense to write them as functions—because functions are total.

To rephrase that, in math, sum() is not a function, so we don't write it using function notation. This to me is clearer—we avoid getting confused by using the same notation for two different things (functions and non-functions).


I have absolutely no idea what you're trying to say.

What do you mean with all that? Maybe I'm being obtuse but you're just saying random words and terms from programming. How on earth is that applicable to physics? I have absolutely no idea what you're trying to propose.


Well, not OP, but that was all very understandable to me, and resonated. I'll try to explain some specific points.

> Why do this rather than define an abstract, human-evaluatable programming language?

The syntax of math is bad for a number of reasons. Making it some big 3D thing isn't going to make that syntax more reasonable.

Math is a language that could learn a lot from programming languages, even if you don't plan on running it on a computer and dealing with all that hassle.

> Making strange squiggles on a page doesn't help. Making strange Greek symbols on the page doesn't help.

> Ditch the alphabet soup (and yes, even the 5th century Greek alphabet soup)

Math is optimized for paper. This makes it a lot less accessible these days. All these custom glyphs are a damn weird idea. Naming your variables "a" wouldn't be accepted in any other language.

>Abstract common ideas into "functions".

Once you've abandoned the idea of single-letter variable names, it becomes easier to abstract common mathematical ideas into functions and build a great big library of those functions.

> Use logically derived variable names (even if they're 5-10 characters).

Naming your variables "a" wouldn't be accepted in any other language, and from the outside it looks like a pretty silly idea.

>Phi shouldn't mean 30 things depending on the field you're in.

Choose more descriptive variable names.


The incorporation, into other language-based fields including mathematics, of the broader programming principles and concepts that have evolved over the course of building software systems, and the standardization of these languages to lessen ambiguities.

Things like using variable names that make sense rather than the current single-letter-plus-subscript limit. Using easily typable characters and combining those to create complex operations. Substituting language for obtuse symbols. Simplification of expression syntax.
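For example (a made-up, minimal illustration), the same formula with single-letter names versus descriptive ones:

    # Single-letter style, as you'd see it on paper: E = m * c^2
    def f(m, c):
        return m * c ** 2

    # The same thing with names a reader can follow without context.
    def rest_energy(mass_kg, speed_of_light_m_per_s):
        return mass_kg * speed_of_light_m_per_s ** 2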

I don't know best how to put it in your terms, nor what I'm missing, because these are things I take for granted being a programmer.


The plethora of Greek symbols and italicized fonts for fancy operations make interesting research papers mostly unreadable for me. It would be great if mathematicians started using descriptive names for each variable (including non-obvious functions/operations), and weren't limited to one letter per symbol (i.e. I'm describing programming).

Granted, there's too much momentum to change anything, especially since it would probably slow down the people doing the fancy math.


It's not just momentum. "Non-obvious" is relative. Every mathematical paper is built on top of thousands of pieces of foundational work; how do you pick which pieces of the foundation are obvious and which are non-obvious? And how do we even use descriptive names? Descriptive names were used in older works like Philosophiæ Naturalis Principia Mathematica, which are practically unreadable (even when translated into your native language) compared to more modern ways of writing things like "F=ma", "F=mx''", or "F=m d^2x/dt^2", which are fantastically easy to understand. Or if I'm writing something like "x+0=x", is there really some descriptive name I can use for "x"?

Or in short, we tried that and terse notation tends to be both easier to understand and more concise.


He expressed something similar to what he had said in the aforementioned talk, that conventional mathematical notation is unhelpful, and that we ought to be able to express things more clearly as programs. I agreed that I find conventional mathematical notation unhelpful; when I try to read papers there are concepts I easily understand as code but I can't parse the symbolic math of. "There's too much operator overloading", I said, "and that makes it hard for me to understand in which way a symbol is being used, and papers never seem to clarify." Sussman replied, "And it's not only the operator overloading! What's that 'x' doing there! That's why we put 'let' in Scheme!"

https://dustycloud.org/blog/tag/sussman/

Looks like you are not the only great mind to think of this (although I do also think that a diagram with some text can lead to the clearest explanations).

There is a link to a video in that post where Sussman explains the notation, although there may be better ones also.


> abstract, human-evaluatable programming language?

yes, it's happening: https://ncatlab.org/nlab/show/Globular

> Use logically derived variable names

I agree. It just becomes a pain in the ass to write this stuff. The Greek is math-asm for the hand.

> completely standard way to define all of your algorithms and logical assertions?

people are working on it, for example: https://github.com/UniMath/UniMath "This coq library aims to formalize a substantial body of mathematics using the univalent point of view."

One thing to note, however, is that the mathematics in the OP is inherently two (and higher) dimensional, and so it makes much more sense to adopt a language that is two (or higher) dimensional. Squashing it down into one dimension (like with a usual language) creates a big mess.


This reminds me of the effort by Sussman to teach physics by programming: Functional Differential Geometry [1] was discussed on Hacker News before [2].

[1] https://mitpress.mit.edu/books/functional-differential-geome... [2] https://news.ycombinator.com/item?id=7884551


You may be interested in Automated Theorem Proving:

I quote the author Geoff Sutcliffe here:

"Automated Theorem Proving (ATP) deals with the development of computer programs that show that some statement (the conjecture) is a logical consequence of a set of statements (the axioms and hypotheses). ATP systems are used in a wide variety of domains. For examples, a mathematician might prove the conjecture that groups of order two are commutative, from the axioms of group theory; a management consultant might formulate axioms that describe how organizations grow and interact, and from those axioms prove that organizational death rates decrease with age; a hardware developer might validate the design of a circuit by proving a conjecture that describes a circuit's performance, given axioms that describe the circuit itself; or a frustrated teenager might formulate the jumbled faces of a Rubik's cube as a conjecture and prove, from axioms that describe legal changes to the cube's configuration, that the cube can be rearranged to the solution state. All of these are tasks that can be performed by an ATP system, given an appropriate formulation of the problem as axioms, hypotheses, and a conjecture.

The language in which the conjecture, hypotheses, and axioms (generically known as formulae) are written is a logic, often classical 1st order logic, but possibly a non-classical logic and possibly a higher order logic. These languages allow a precise formal statement of the necessary information, which can then be manipulated by an ATP system. This formality is the underlying strength of ATP: there is no ambiguity in the statement of the problem, as is often the case when using a natural language such as English. Users have to describe the problem at hand precisely and accurately, and this process in itself can lead to a clearer understanding of the problem domain. This in turn allows the user to formulate their problem appropriately for submission to an ATP system.

The proofs produced by ATP systems describe how and why the conjecture follows from the axioms and hypotheses, in a manner that can be understood and agreed upon by everyone, even other computer programs. The proof output may not only be a convincing argument that the conjecture is a logical consequence of the axioms and hypotheses, it often also describes a process that may be implemented to solve some problem. For example, in the Rubik's cube example mentioned above, the proof would describe the sequence of moves that need to be made in order to solve the puzzle.

ATP systems are enormously powerful computer programs, capable of solving immensely difficult problems. Because of this extreme capability, their application and operation sometimes needs to be guided by an expert in the domain of application, in order to solve problems in a reasonable amount of time. Thus ATP systems, despite the name, are often used by domain experts in an interactive way. The interaction may be at a very detailed level, where the user guides the inferences made by the system, or at a much higher level where the user determines intermediate lemmas to be proved on the way to the proof of a conjecture. There is often a synergetic relationship between ATP system users and the systems themselves:

The system needs a precise description of the problem written in some logical form, the user is forced to think carefully about the problem in order to produce an appropriate formulation and hence acquires a deeper understanding of the problem, the system attempts to solve the problem, if successful the proof is a useful output, if unsuccessful the user can provide guidance, or try to prove some intermediate result, or examine the formulae to ensure that the problem is correctly described, and so the process iterates.

ATP is thus a technology very suited to situations where a clear thinking domain expert can interact with a powerful tool, to solve interesting and deep problems. Potential ATP users need not be concerned that they need to write an ATP system themselves; there are many ATP systems readily available for use. At the 1st order level some well known and successful systems are Otter, E, SPASS, Vampire, and Waldmeister. Higher order systems include ACL2, Coq, HOL, and Nqthm. For examples of problems written in classical 1st order logic, the TPTP problem library is a useful resource, which also has a simple interface for users to try out many 1st order ATP systems.

Fields where ATP has been successfully used include logic, mathematics, computer science, engineering, and social science; some outstanding successes are described below. There are potentially many more fields where ATP could be used, including biological sciences, medicine, commerce, etc. The technology is waiting for users from such fields."

Link: http://www.cs.miami.edu/~tptp/OverviewOfATP.html
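To make the group-theory example in the quote concrete, here's a hedged sketch in Lean 4 with Mathlib (an interactive proof assistant rather than a fully automatic ATP system, so the steps are spelled out by hand; `inv_eq_of_mul_eq_one_right` and `mul_inv_rev` are assumed Mathlib lemmas). It proves the closely related statement usually fed to provers: if every element of a group squares to the identity, the group is commutative.

    import Mathlib

    theorem mul_comm_of_sq_eq_one {G : Type*} [Group G]
        (h : ∀ x : G, x * x = 1) (a b : G) : a * b = b * a := by
      -- Every element is its own inverse.
      have hinv : ∀ x : G, x⁻¹ = x := fun x => inv_eq_of_mul_eq_one_right (h x)
      calc a * b = (a * b)⁻¹ := (hinv (a * b)).symm
        _ = b⁻¹ * a⁻¹ := mul_inv_rev a b
        _ = b * a := by rw [hinv, hinv]

An automatic first-order prover such as E or Vampire would instead be handed the group axioms plus "∀x, x·x = e" as axioms and the commutativity statement as the conjecture, and would search for the proof itself.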



