But we can devise generally applicable rules of reasoning from first principles: that's called logic. I am pretty sure the next step is to properly combine machine learning and logic.
Seems unlikely; that never worked in the past. And humans don't actually use logic (especially formal logic) to come up with anything; they just use it to justify what they came up with.
Not even mathematicians think in terms of logic when trying to solve problems.
Of course mathematicians also think in terms of logic. It's what you learn when you study mathematics; you soak it up automatically, although few study logic explicitly. And before 2015, a machine beating the world's best Go player also seemed pretty unlikely.
I've studied mathematics, too. Yes, formal logic is an afterthought when you do mathematics. But formal logic is just an explicit representation of what goes on internally in a mathematician. Or at least that's how I approach formal logic (most logicians don't). I would describe these internal processes inside a mathematician (and outside, when used for communication) as intuition + "logic to keep intuition in check". Sounds like ML + logic to me.
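A minimal sketch of what "intuition + logic to keep intuition in check" might look like as ML + logic (all names and scores below are hypothetical, invented for illustration): a scorer stands in for the learned component that proposes plausible candidate formulas, and a brute-force tautology checker stands in for the logical component that only lets the valid ones through.

```python
from itertools import product

def is_tautology(formula, names):
    """The 'logic' half: check a propositional formula (a Python predicate
    over a dict of truth values) by exhaustive enumeration."""
    return all(formula(dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

# Stand-in for the ML half: candidate formulas with made-up plausibility scores.
candidates = [
    (0.9, "p or not p",  lambda v: v["p"] or not v["p"]),
    (0.8, "p -> p",      lambda v: (not v["p"]) or v["p"]),
    (0.7, "p and not p", lambda v: v["p"] and not v["p"]),
]

# Keep only the candidates the checker certifies, best-scored first.
accepted = [name
            for score, name, f in sorted(candidates, key=lambda t: -t[0])
            if is_tautology(f, ["p"])]
print(accepted)  # the contradiction "p and not p" is filtered out
```

The division of labor is the point: the proposer can be arbitrarily fallible, because nothing it suggests reaches the output without passing the symbolic check.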
There are already tons of systems (for example Google Translate) that combine rule-based reasoning with probabilistic reasoning. Looks to be working to me.
It doesn't look like they still use any rule-based reasoning?
The blog post says:
> With this update, Google Translate is improving more in a single leap than we’ve seen in the last ten years combined. [...]
Which seems like pretty strong evidence to me that moving away from rule-based reasoning, or even from a hybrid approach that includes it, was a clear win?
First principles don't work in the space of systems geared toward extreme generalization, such as LLMs. You need to be ready to compare anything with anything and build bridges between many principles. In fact, there is a deep link between the progress of structuralism in mathematics, culminating in homotopy type theory, and its parallel (r)evolution in the humanities with the discovery of manuscripts by the founder of structural linguistics, Ferdinand de Saussure:
> Identity is what provides the irreducible basis, in the sense that we cannot enter into the consideration of specific facts that are placed under this identity, and it is this identity that becomes for us the true concrete fact, beyond which there is nothing more.

> ...

> For example, for a musical composition, compared to a painting. Where does a musical composition exist? It is the same question as asking where 'aka' exists. In reality, this composition only exists when it is performed; but to consider this performance as its existence is false. Its existence is the identity of the performances.

> ...

> For each of the things we have considered as a truth, we have arrived through so many different paths that we confess we do not know which one should be preferred. To properly present the entirety of our propositions, it would be necessary to adopt a fixed and defined starting point. But what we are trying to establish is that it is false to admit in linguistics a single fact as defined in itself. There is, therefore, a necessary absence of any starting point, and if some reader is willing to follow our thoughts carefully from one end to the other of this volume, they will recognize, we are convinced, that it was, so to speak, impossible to follow a very rigorous order. We will allow ourselves to present the same idea to the reader up to three or four times in different forms, because there really is no starting point more appropriate than another on which to base the demonstration.

> ...

> As language offers no substance under any of its manifestations, but only combined or isolated actions of physiological, physical, and mental forces, and as nevertheless all our distinctions, our terminology, and all our ways of speaking are based on this involuntary assumption of a substance, we cannot refuse, first and foremost, to recognize that the most essential task of the theory of language will be to untangle what our primary distinctions are all about.

> ...

> There are different types of identity. This is what creates different orders of linguistic facts. Outside of any identity relationship, a linguistic fact does not exist. However, the identity relationship depends on a variable point of view that one decides to adopt; therefore, there is no rudiment of a linguistic fact outside the defined point of view that presides over distinctions.
There is no reason why logic cannot follow various different threads of reasoning, interweave them, merge them, split them again, and so on. Logic constitutes a first principle of utmost generality; I cannot imagine anything more general. Identity is not the same as equivalence: equivalence is a quotient of identity, consisting of two classes, those values which are identical to True, and those which are not.
When talking about identity/equivalence of types in the context of homotopy type theory, yes. This is literally what the univalence axiom states.
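For readers who haven't seen it, the univalence axiom can be stated schematically as follows (writing $\mathcal{U}$ for a type universe, $=$ for the identity type, and $\simeq$ for type equivalence):

```latex
% Univalence: for types A, B : U, the canonical map from identities to
% equivalences, idtoeqv : (A = B) -> (A \simeq B), is itself an equivalence:
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
```

Informally: identity of types is (equivalent to) equivalence of types, which is why the identity/equivalence distinction collapses in exactly this setting.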
Auggierose, I'm curious about your thoughts on how we can provide more rigor to LLMs when it comes to large-scale program transformations and proof synthesis. Given the complexity and versatility of these systems, what kind of foundational framework do you believe would enable GPT and similar models to synthesize and execute proofs rigorously? How can we ensure that they are both reliable and adaptable while dealing with various mathematical and logical domains?
More importantly, how would this relate to NLP tasks such as: alright, the story is good, but can you rewrite it in the style of Auggierose?