
Think about it: with 20% tariffs, we will now have the option to work 14-hour shifts for 60¢ a day!

If the discrete logarithm problem is NP-hard, then I will eat my hat. The discrete logarithm problem can be solved by Shor's algorithm on a quantum computer, placing it in the complexity class BQP. Anyone who claims that BQP contains an NP-hard problem is selling something - I would bet at trillion-to-one odds against such a claim (if there were any hope of definitively settling the problem).


There are plenty of mathematicians - mostly set theorists - who are actively working on finding new axioms of mathematics to resolve questions which can't be resolved by ZFC. Projective Determinacy is probably the most important example of a new axiom of mathematics which goes far beyond what can be proved in ZFC, but which has become widely accepted by the experts. (See [1] for some discussion about the arguments in favor of projective determinacy, and [2] for a taste of Steel's position on the subject.)

I suggest reading Kanamori's book [3] if you want to learn more about this direction. (There are plenty of recent developments in the field which occurred after the publication of that book - for an example of cutting edge research into new axioms, see the paper [4] mentioned in one of the answers to [5].)

If you are only interested in arithmetic consequences of the new axioms, and if you feel that consistency statements are not too interesting (even though they can be directly interpreted as statements about whether or not certain Turing machines halt), you should check out some of the research into Laver tables [6], [7], [8], [9]. Harvey Friedman has also put a lot of work into finding concrete connections between advanced set-theoretic axioms and more concrete arithmetic statements, for instance see [10].
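
For a sense of how concrete the Laver tables are: A_n is the unique binary operation * on {1, ..., 2^n} satisfying p * 1 = p + 1 (cyclically) together with the left self-distributive law p * (q * r) = (p * q) * (p * r). Below is a minimal sketch in Python (names my own) of the standard bottom-up computation; the large-cardinal connection is that the period of the first row is only known to grow without bound under a rank-into-rank axiom.

    def laver_table(n):
        """Compute the Laver table A_n on {1, ..., 2**n}."""
        N = 2 ** n
        star = [[0] * (N + 1) for _ in range(N + 1)]  # 1-indexed: star[p][q] = p * q
        for q in range(1, N + 1):
            star[N][q] = q                 # row N acts as the identity
        for p in range(N - 1, 0, -1):      # fill rows from the bottom up
            star[p][1] = p + 1             # p * 1 = p + 1
            for q in range(1, N):
                # self-distributivity forces p * (q + 1) = (p * q) * (p + 1),
                # and p * q > p, so that row is already filled in
                star[p][q + 1] = star[star[p][q]][p + 1]
        return star

    table = laver_table(2)
    print([table[1][q] for q in range(1, 5)])  # [2, 4, 2, 4] - row 1 has period 2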

[1] https://mathoverflow.net/questions/479079/why-believe-in-the...
[2] https://cs.nyu.edu/pipermail/fom/2000-January/003643.html
[3] "The Higher Infinite: Large Cardinals in Set Theory from their Beginnings" by Akihiro Kanamori
[4] "Large cardinals, structural reflection, and the HOD Conjecture" by Juan P. Aguilera, Joan Bagaria, Philipp Lücke, https://arxiv.org/abs/2411.11568
[5] https://mathoverflow.net/questions/449825/what-is-the-eviden...
[6] https://en.wikipedia.org/wiki/Laver_table
[7] "On the algebra of elementary embeddings of a rank into itself" by Richard Laver, https://doi.org/10.1006%2Faima.1995.1014
[8] "Critical points in an algebra of elementary embeddings" by Randall Dougherty, https://arxiv.org/abs/math/9205202
[9] "Braids and Self-Distributivity" by Patrick Dehornoy
[10] "Issues in the Foundations of Mathematics" by Harvey M. Friedman, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3137697


When you get good enough at mathematics, you can tell if your proofs are correct or not without asking a TA to grade them for you. Most mathematicians reach this level before they finish undergrad (a rare few reach it before they finish high school). While AI hasn't reached this level yet, there is no fundamental barrier stopping it from happening - and for now, researchers can use formal proof-checking software like Lean, Coq, or Isabelle to act as a grader.
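
To make "acting as a grader" concrete, here is a toy example in Lean 4 (core library only): the checker compiles the file only if the proof term really proves the stated claim, and rejects it otherwise.

    -- Lean verifies that Nat.add_comm has exactly the type we claim;
    -- swapping in a bogus proof term makes the file fail to compile.
    theorem add_comm_nat (m n : Nat) : m + n = n + m :=
      Nat.add_comm m n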

(In principle, it should also be possible to get good enough at philosophy to avoid devolving into a mess of incoherence while reasoning about concepts like "knowledge", "consciousness", and "morality". I suspect some humans have achieved that, but it seems rather difficult to tell...)


> When you get good enough at mathematics, you can tell if your proofs are correct or not without asking a TA to grade them for you.

This is simply not true - you can get a very good sense of when your argument is correct, yes. But having graded for (graduate, even!) courses, I can tell you that even advanced students make mistakes. It's not limited to students, either; tons of textbooks have significant errata, and it's not as if no retraction in math has ever been issued.

These errors get corrected by talking with other people - if you have an LLM spew out synthetic chain-of-reasoning data, you will probably get at least some wrong proofs, and if you blindly try to scale on that data I would expect the model to collapse.

Even tying into a proof-checker seems non-trivial to me. If you work purely in the proof-checker, you never say anything wrong - but presentations in a proof-checking language are very different from textual ones, so I would anticipate issues with the LLM leveraging knowledge from, say, textbooks in its proofs. You might also run into issues of the AI playing a game against the compiler rather than building understanding (you see elements of this in the proofs produced by AlphaProof). And if you start mixing natural language and proof checkers, you've just kicked the verification can down the road a bit, since you need some way of ensuring the natural language actually matches the statements being shown by the proof checker.

I don't think these are insurmountable challenges, but I also don't think it's as simple as the "generate synthetic data and scale harder" approach the parent comment suggests. Perhaps I'm wrong - time will tell.


The error rate of human mathematical work is not zero, but it does go down exponentially with the amount of time that the mathematician spends carefully thinking about the problem. Mistakes tend to be the result of typos, time pressure, or laziness - showing your work to others and having them check it over does help (it's one of the reasons we have peer review!), but is absolutely not necessary.

If the error rate is low enough - and by simply spending a constant factor more time finding and correcting errors, you can get it below one in a million - then you do get a virtuous feedback loop even without tying in a proof-checker. That's how humans have progressed, after all. While you are right to say that the proof-checker approach certainly is not trivial, it is currently much easier than you would expect - modern LLMs are surprisingly good at converting math written in English directly to math formalized in Lean.
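
To illustrate what that English-to-Lean step looks like at the level of statements, here is one example (the lemma name is from Mathlib; the wrapper theorem is my own): the English claim "there are infinitely many primes" becomes the assertion that there is a prime above every natural number.

    import Mathlib

    -- "there are infinitely many primes", formalized
    theorem infinitely_many_primes : ∀ n : ℕ, ∃ p, n ≤ p ∧ p.Prime :=
      Nat.exists_infinite_primes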

I do think that LLMs will struggle to learn to catch their mistakes for a while. This is mostly because the art of catching mistakes on your own is not taught well (often it is not taught at all), and the data sets that modern LLMs train on probably contain very, very few examples of people applying this art.

A tangent: how do human mathematicians reliably manage to catch mistakes in proofs? Going line-by-line through a proof and checking that each line logically follows from the previous lines is what many people believe we do, but it is actually a method of last resort - something we only do if we are completely lost and have given up on concretely understanding what is going on. What we really do is build up a collection of concrete examples and counterexamples within a given subject, and watch how the steps of the proof play out in each of these test cases. This is why humans tend to become much less reliable at catching mistakes when they leave their field of expertise - they haven't yet built up the necessary library of examples to allow them to properly interact with the proofs, and must resort to reading line by line.


I always recommend Watrous's lecture notes: https://cs.uwaterloo.ca/~watrous/QC-notes/QC-notes.pdf

I prefer his explanation to most other explanations because he starts, right away, with an analogy to ordinary probabilities. It's easy to understand how linear algebra is related to probability (a random combination of two outcomes is described by linearly combining them), so the fact that we represent random states by vectors is not surprising at all. His explanation of the Dirac bra-ket notation is also extremely well executed. My only quibble is that he doesn't introduce density matrices (which in my mind are the correct way to understand quantum states) until halfway through the notes.
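
The analogy is easy to see in a few lines of NumPy (a toy of my own, not from the notes): a classical random bit is a probability vector acted on by stochastic matrices, while a qubit is an amplitude vector acted on by unitary matrices, with outcome probabilities recovered as squared magnitudes.

    import numpy as np

    p = np.array([1.0, 0.0])               # classical bit, certainly 0
    noise = np.array([[0.9, 0.2],
                      [0.1, 0.8]])         # columns sum to 1 (stochastic)
    print(noise @ p)                       # [0.9, 0.1]

    psi = np.array([1.0, 0.0])             # qubit in state |0>
    H = np.array([[1,  1],
                  [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)
    print(np.abs(H @ psi) ** 2)            # outcome probabilities: [0.5, 0.5]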


MIT's PRIMES program does exactly this - they give advanced high school students a mentor who picks out a problem, gets them up to speed on what is known, and then they work on the problem for a year and publish their results. It tends to work best with problems which have a computational aspect, so that the students can get some calculations done on the computer to get the ball rolling.


If the Hanoi pieces alternate in color, there is another very easy algorithm: always avoid putting two pieces of the same color directly on top of each other. I noticed this by accident while watching someone else solve a Hanoi puzzle with alternating colored pieces. (The person I was watching didn't know this trick either. I suspect the manufacturers of the alternatingly colored puzzle also didn't know about it. So where did the knowledge come from?)
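
The observation is easy to check mechanically (a quick sketch of my own): replay the standard optimal solution and assert that it never stacks two same-colored pieces, where odd sizes are one color and even sizes the other.

    def hanoi_moves(n, src=0, aux=1, dst=2):
        """Yield the moves (source peg, destination peg) of the optimal solution."""
        if n == 0:
            return
        yield from hanoi_moves(n - 1, src, dst, aux)
        yield (src, dst)
        yield from hanoi_moves(n - 1, aux, src, dst)

    def check_color_rule(n):
        pegs = [list(range(n, 0, -1)), [], []]
        for src, dst in hanoi_moves(n):
            disk = pegs[src].pop()
            if pegs[dst]:
                # same parity means same color - this assertion never fires
                assert pegs[dst][-1] % 2 != disk % 2
            pegs[dst].append(disk)
        assert pegs[2] == list(range(n, 0, -1))  # puzzle solved

    check_color_rule(10)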


Have you tried to read any of the literature on the Risch algorithm? If you haven't, you might want to get started by taking a look at the paper "Integration in Finite Terms" by Rosenlicht [1] and chasing down some of the references mentioned in [2].

Of course, in the real world we don't give up on integrals just because they can't be expressed in terms of elementary functions. Usually we also check if the result happens to be a hypergeometric function, such as a Bessel function. If you want to get started on understanding hypergeometric functions, maybe try reading [3] (as well as the tangentially related book "A = B" [4]).
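
You can watch both halves of this play out in a computer algebra system. SymPy, for instance, implements parts of the Risch algorithm, and when no elementary antiderivative exists it will often answer in terms of special functions instead:

    from sympy import exp, integrate, symbols

    x = symbols('x')
    print(integrate(exp(-x**2), x))  # sqrt(pi)*erf(x)/2 - not elementary
    print(integrate(exp(x)/x, x))    # Ei(x), the exponential integral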

[1] https://www.cs.ru.nl/~freek/courses/mfocs-2012/risch/Integra...
[2] https://mathoverflow.net/questions/374089/does-there-exist-a...
[3] https://www.math.ru.nl/~heckman/tsinghua.pdf
[4] https://www2.math.upenn.edu/~wilf/AeqB.html


Reminds me of "Scheduling Algorithms for Procrastinators" [1]

[1] https://www3.cs.stonybrook.edu/~bender/newpub/2007-BenderClT...


I was thinking of the inefficient sorts that don't do illogical useless stuff. Here's a reddit thread [0], the first search match I found.
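
If I remember the genre right, the canonical example is "slowsort" from Broder and Stolfi's "Pessimal Algorithms and Simplexity Analysis" - every step makes genuine progress toward a sorted array, yet the running time is worse than polynomial. A sketch from memory:

    def slowsort(a, i=0, j=None):
        """Sort a[i..j] in place, as slowly as possible without wasted work."""
        if j is None:
            j = len(a) - 1
        if i >= j:
            return
        m = (i + j) // 2
        slowsort(a, i, m)             # recursively sort the first half
        slowsort(a, m + 1, j)         # recursively sort the second half
        if a[m] > a[j]:               # the maximum now sits at m or j;
            a[m], a[j] = a[j], a[m]   # move it to the end
        slowsort(a, i, j - 1)         # sort everything but the maximum, again

    xs = [5, 3, 1, 4, 2]
    slowsort(xs)
    print(xs)  # [1, 2, 3, 4, 5]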

[0] https://old.reddit.com/r/programming/comments/11qjga0/the_sl...


Watrous's notes get straight to the point, with a good analogy to ordinary probability theory: https://cs.uwaterloo.ca/~watrous/QC-notes/QC-notes.pdf

