Surprised you didn't go into something AI-adjacent

I don't know what his reasons are, but it makes sense to me. Yes, there are incredible results coming out of the AI world, but the methods aren't necessarily that interesting (i.e., intellectually stimulating), and it can be frustrating working in a field with this much noise.

I don't want to come across as too harsh, but having studied machine learning since 2015, I find the most recent crop of people excited about working on AI is deep in Dunning-Kruger territory. I may conflate this a bit with the fascination with results over process (I suppose that befuddlement is what led me to physics over engineering), but having worked in ML research for so long, it's hard to gin up the perspective that these things are actually teleologically useful, and not just randomly good enough most of the time to keep up the illusion.

What do you mean by "things that are actually teleologically useful"?

Fellow physicist here, by the way


Like useful in an intentional way: purpose-built and achieves success via accurate, parsimonious models. The telos here being the stated goal of a structurally sound agent that can emulate a human being, as opposed to the accidental, max-entropy implementations we have today.

Sounds like an arbitrary telos, especially in a world where one of the most useful inventions in human existence has been turning dead dinosaurs into flying metal containers that transport us great distances.

Every goal is equally arbitrary; I'm speaking to the assumed ideology of the AI fanatics.

Is a guide dog teleologically useful?

Not if you’re taste testing ceviche

I see, so humans are also not usefully intelligent in an intentional way, because they also follow the 2nd law of thermodynamics and maximize entropy and aren't deterministic?

Pure, refined “but humans also”.

What do you mean by "Pure, refined"?

You're right that "but humans also" is better than my "and humans also"


Not OP, but I'm assuming he means that they are maddeningly black-boxy, if you want to know how the sausage is made.

I feel that way sometimes too.

But then I think about how maddeningly unpredictable human thought and perception are, with phenomena like optical illusions, cognitive biases, and limited working memory. Yet they still produce incredibly powerful results.

Not saying ML is anywhere near humans yet, despite all the recent advances, but perhaps a fully explainable AI system, with precise logic and 100% predictability, isn't actually needed to get most of what we need out of AI. And given the "analog" nature of the universe, maybe it's not even possible to have something perfect.


> But then I think about how maddeningly unpredictable human thought and perception are, with phenomena like optical illusions, cognitive biases, and limited working memory.

I agree with your general point (I think), but I think that "unpredictable" is really the wrong word here. Optical illusions, cognitive biases, and limited working memory are mostly extremely predictable, and make perfect sense if you look at the role that evolution played in developing the human mind. E.g., many optical illusions are due to the fact that the brain needs to recreate a 3D model from a 2D image, and it has to do this by assuming what is statistically most likely in the world we live in (or, really, the world of the African savannahs where humans first evolved and walked upright). Thus, it's possible to "trick" this system by creating a 2D image from a 3D set of objects that is statistically unlikely in the natural world.

FWIW Steven Pinker's book "How the Mind Works" has a lot of good examples of optical illusions and cognitive biases and the theorized evolutionary bases for these things.


Lean is AI-adjacent.

Only because the AI people find it interesting. It's not really AI in itself.

If you're interested in applications of AI to mathematics, you're faced with the problem of what to do when the ratio of plausible proofs to humans who can check them radically changes. There are definitely some in the AI world who feel that the existing, highly social construct of informal mathematical proof will remain intact, just with humans replaced by agents, but amongst mathematicians there is a growing realization that formalization is the best way to deal with this epistemological crisis.

It helps that work done in Lean (on Mathlib and other developments) is reaching an inflection point just as these questions become practically relevant to AI.


It's not AI in itself, but it's one of the best possibilities for enabling AI systems to generate mathematical proofs that can be automatically verified as correct, which is needed at the scale at which they can potentially operate.

Of course it has many non-AI uses too.
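To make that concrete: a proof in Lean is just a term the compiler checks, so it either typechecks or the file is rejected, with no human referee needed. A minimal sketch (the theorem name is mine; Nat.add_comm is from the Lean 4 standard library):

    -- Commutativity of addition on the naturals.
    -- If the proof term were wrong, Lean would reject the file outright,
    -- which is exactly the property you want when the "author" is a model.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

An AI system can propose terms like this at whatever scale it likes; the kernel's acceptance is the ground truth.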


Proof automation definitely counts as AI. Not all AI is based on machine learning or statistical methods; GOFAI is a thing too.

If you want superhuman performance like the AlphaZero series, you need a verifier (a value network) to tell you if you are on the right track. Lean (a proof checker) can in general act as a trusted critic.
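As a rough sketch of what that loop looks like (all the names here, propose and run_tactic, are hypothetical stand-ins, not a real Lean API): an untrusted policy suggests steps, and the trusted checker prunes anything unsound.

    # Hypothetical sketch: policy-guided proof search with a trusted verifier.
    # `propose(goal)` is a learned policy suggesting candidate tactics;
    # `run_tactic(goal, tactic)` stands in for a Lean-style checker that
    # returns the remaining subgoals, or raises TacticError if the step
    # is unsound. Only the checker is trusted, never the policy.

    class TacticError(Exception):
        pass

    def prove(goals, propose, run_tactic, depth=0, max_depth=8):
        if not goals:                  # every subgoal discharged: proof found
            return []
        if depth == max_depth:         # give up on this branch
            return None
        goal, rest = goals[0], goals[1:]
        for tactic in propose(goal):   # plausible-looking candidates only
            try:
                subgoals = run_tactic(goal, tactic)  # critic may reject
            except TacticError:
                continue               # policy was wrong; the checker caught it
            plan = prove(subgoals + rest, propose, run_tactic,
                         depth + 1, max_depth)
            if plan is not None:
                return [tactic] + plan
        return None                    # no verified path at this depth

The policy can hallucinate freely; only steps the checker accepts survive, so the search never reports a false proof.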

They do have AI on their roadmap, though: https://lean-fro.org/about/roadmap-y2/

Seems more like applying Lean to AI development, no?

Partially, I guess, but also: "We will seek to provide tooling, data, and other support that enables AI organizations and researchers to advance Lean’s contribution at the intersection of AI, math, and science."

It's not ML, but it is AI.

This is *VERY* AI-adjacent... the next batch of AI algos will need to integrate reasoning through theorem provers to go to the next level
