Hacker News
Precise atom manipulation through deep reinforcement learning (nature.com)
118 points by PaulHoule on Dec 8, 2022 | hide | past | favorite | 31 comments



A significant amount of grad student time is sunk into acting as human PID loops and sensor-fusion models for their experiments. Repetition makes humans dumber and/or depressed. The search space is limited to human scales, and the predictive models may rely on hidden variables. These hidden variables may also necessitate more sensitive electronics to sample, and introduce timing constraints on the experiment design.
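For readers who haven't written one: the loop those grad students are emulating by hand is just a few lines. A minimal sketch, where the gains and the toy first-order plant are invented for illustration and not taken from the paper:

```python
# Minimal discrete PID loop driving a toy first-order plant toward a
# setpoint. The gains and the plant model are invented for illustration;
# they are not tuned for any real instrument.

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def simulate(setpoint=1.0, steps=3000, dt=0.01):
    x = 0.0                # plant state, e.g. a stage position
    state = (0.0, 0.0)     # (integral, previous error)
    for _ in range(steps):
        error = setpoint - x
        u, state = pid_step(error, state, dt=dt)
        x += (u - x) * dt  # first-order plant with unit time constant
    return x               # settles near the setpoint
```

The grad student's job is choosing the gains and watching for the hidden variables this controller doesn't model; that tuning loop is exactly what the paper automates.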

It is ripe territory for creating automated experimenters that generate PID loop algorithms and gradient fields without needing to explicitly model hidden or difficult-to-measure variables in the environment. These algorithms are not too complicated, and we can already see the same idea in fuzzing / concolic execution platforms for discovering security vulnerabilities. The concept can be applied across disciplines.

In this paper's case, the concept is applied to a robot (an STM) to control for hidden forces that impact the atoms and bias its own output. These forces change over time. Source code, highly legible, good references for a beginner, even the controller code included… this is research!


I find this evocative of Eric Drexler's idea of a molecular assembler, but here it's really moving around atoms that are weakly attracted to a surface, as opposed to covalently bonded. It's like the experiment where the people at IBM spelled out "IBM" with xenon atoms, except these people had automation to do it.


Was Drexler just bullshitting, or is this actually achievable?


I would say 30 years after he wrote Nanosystems there hasn’t been any progress on the tip of the assembler. I still use that book as a model though for how to talk about advanced automatic manufacturing systems. The example of life shows it is possible to construct linear structures with three different systems that can fold up in pretty arbitrary ways, but maybe you can’t make diamond that way.


Depending on whether you believe in creationism, you are a living testament to what’s possible for molecular self assembly.


This is one of the more compelling arguments for it. It at least puts it within the bounds of the possible. God did it, so there's a way that doesn't violate the laws of nature.

I think this touches on my main beef with one specific part of Drexler's argument: he's written that biological systems are akin to a chaotic soup of machinery, while the APM systems he proposes are neat and organized, more like a factory. This raises two questions: #1, why hasn't nature built something like his APM factories? And #2, why shouldn't we pursue something more like biology instead of his version of current-tech-but-smaller APM?

Drexler uses Biology as an example of APM being feasible, but his vision is distinct from biology's.


Those are the obvious first questions, yeah.

I don't think #1 is very puzzling: can you expect a gradual continuous evolution from wiggly machines in solution, which communicate primarily by diffusion, and which must be robust to genetic variation, to a factory in hard vacuum? The path must be not just possible, but fitness-enhancing in each neighborhood (on the scale of the steps evolution took historically). In our history there was some evolution in this direction: introduction of compartments, active transport, complexes which cut out most of the diffusion step between related enzymes. But the compartments and the complexes are not qualitatively different.

It seems a harder question why life stuck with protein/bone/enamel instead of discovering lighter and stronger densely-linked carbon structural materials. Maybe because bone is continually incrementally torn down and rebuilt?

Re #2. I think anyone would agree that biology-style nanomachinery is interesting and promising. But factory-style opens up a whole new level of possibilities with orders of magnitude greater performance on multiple measures. People can pursue both! Flapping-wing flight had both scientific interest and potential applications in more-agile flying machines; that doesn't mean fixed-wing wasn't a much more strategic direction in the years around 1900.


Spider silk is a protein that is plenty light and strong...


Yes, the tensile strength is like steel but lighter. (And if you ever get gored by a rhino horn you aren't going to shrug and say "hey it's just protein.") But it's a long way from the limit of strength.


The steel man version of creationism is that the laws of nature behind self-assembly are divinely designed, rather than that a god directly connects each atomic bond. I'd guess it's also the more common version.




Sounds like God uses one-letter variable names...



This is the modern version of God, which also uses the Latin alphabet, not Linear A or cuneiform.


Under conditions of omniscience and omnipotence the difference is meaningless of course but I suspect you're right.


I've been wondering this since machine learning first blew up in the 2010s, and I guess this is as good a place to ask as any: is there some sort of law/theory/conjecture/whatever that states that neural networks are algebraically incapable of modeling physical phenomena in general, or at least within the boundary conditions implied by the training data? For example, if the physical calculation involves imaginary numbers or some other exotic function.

I'm thinking specifically of linear motors [1], which are capable of sub-micron positioning as long as the rotor and stators can be manufactured with enough precision. Is there some fundamental problem preventing us from using an interferometric laser encoder to train a neural network to control each linear motor coming off the assembly line, thus eliminating much of the precision manufacturing?

[1] https://en.wikipedia.org/wiki/Linear_motor
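As a toy version of this proposal: sweep each motor's command range, record what the encoder measures, and invert that map. Everything below is invented for illustration; a linear-interpolation lookup table stands in for the neural network, and the periodic "pitch error" is a made-up stand-in for manufacturing imperfection.

```python
import numpy as np

# Toy calibration sketch: learn a per-motor correction map from
# (commanded position, laser-encoder reading) pairs, then command
# through the inverted map. All numbers are illustrative.

rng = np.random.default_rng(0)

def motor(commanded):
    """Toy motor: actual position = command plus a periodic pitch error (mm)."""
    return commanded + 0.02 * np.sin(2.0 * commanded)

# Calibration run: sweep the command range, record measured positions
# (with a little simulated encoder noise).
commands = np.linspace(0.0, 10.0, 200)
measured = motor(commands) + rng.normal(0.0, 1e-4, commands.size)

def corrected_command(target):
    """Invert the calibration: which command lands the motor at `target`?"""
    return np.interp(target, measured, commands)
```

Against this toy model, `motor(corrected_command(t))` lands far closer to `t` than the raw command does; the open question in the comment is whether a learned model generalizes across loads, temperatures, and wear the way a precision-machined part does.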


The universal approximation theorem states that neural nets can approximate any function[0] (though the network may need to be arbitrarily big). I'm not aware of any restrictions/limits beyond that.

[0] https://en.wikipedia.org/wiki/Universal_approximation_theore...
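A quick numerical illustration of the theorem's flavor: one hidden layer of random tanh features, with only the output layer solved by least squares, fits sin(x) well. The width, weight scales, and target function are arbitrary choices for illustration.

```python
import numpy as np

# One hidden tanh layer with random weights; the output layer is fit by
# plain least squares. Approximates sin(x) on [-pi, pi].

rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 400)[:, None]
y = np.sin(x).ravel()

width = 200
W = rng.normal(0.0, 2.0, (1, width))   # random hidden weights
b = rng.normal(0.0, 2.0, width)        # random hidden biases
H = np.tanh(x @ W + b)                 # hidden activations, shape (400, width)

# Solve only the output layer -- a linear least-squares problem.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
err = np.max(np.abs(H @ coef - y))     # worst-case error on the grid
```

Widening the hidden layer drives `err` down, which is the theorem's content; whether gradient descent actually finds such weights is a separate question.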


I believe it's any continuous function, and training has its own limitations.


Given that they can represent continuous functions you can lightly modify them to produce all sorts of interesting behavior like piecewise continuity as well.

The training thing is important though. There are problems which are arbitrarily likely to have all training algorithms fail, not just in a practical sense, but as a fundamental limitation.


Neural networks can approximate the function fine. The issue is that reinforcement learning works by taking initially random (or largely random) actions, then using feedback to get better. That means you fail over and over, possibly millions or billions of times, before you ever get good. If you're learning to play chess or Go or Starcraft, that's fine. If you're learning to perform high-precision industrial process control, I'm not sure you can afford that much initial waste, and it may take centuries to get through enough iterations since the process does not take place entirely inside a computer.


They can approximate any continuous map between finite dimensional vector spaces. Complex numbers are a finite dimensional vector space.
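Concretely (a small sanity check, not from the thread): a complex number a+bi is just the real pair (a, b), and multiplication by a fixed w = c+di is the real-linear map with matrix [[c, -d], [d, c]]. So a network over R^2 can represent maps on C; nothing exotic is needed.

```python
# Complex arithmetic rewritten as operations on real pairs, to show
# that "imaginary numbers" pose no representational problem for a
# network whose inputs and outputs live in R^2.

def to_pair(z):
    return (z.real, z.imag)

def mul_as_real(w, z):
    c, d = to_pair(w)
    a, b = to_pair(z)
    # The 2x2 real matrix [[c, -d], [d, c]] applied to (a, b),
    # which matches (c + di)(a + bi).
    return (c * a - d * b, d * a + c * b)

assert mul_as_real(2 + 1j, 3 - 4j) == to_pair((2 + 1j) * (3 - 4j))
```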


This reminds me of schemes for isotope separation using tailored ultrashort laser pulses. Different isotopes respond slightly differently, and by arranging the pulses properly one isotope can be ionized at a much larger rate than another.


All this ML/AI talk makes me wonder if there could be an angle for something like "discovery of principles by debugging".

Imagine, for example, training a neural net to classify random numbers into prime and non-prime.

If such a model succeeds at the task, how would one understand what the basic "theory/higher level function" is which the model "discovered/learned"?

I am definitely gonna try this, just for fun.
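A minimal way to set up that experiment's data (the encoding width and sample range are arbitrary choices; feed X, y to whatever classifier you like):

```python
import random

# Sketch of the proposed experiment: encode integers as fixed-width
# binary digit vectors and label them prime / not prime.

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def encode(n, width=16):
    # Little-endian binary digits of n, padded/truncated to `width` bits.
    return [(n >> i) & 1 for i in range(width)]

random.seed(0)
nums = random.sample(range(2, 2 ** 16), 5000)
X = [encode(n) for n in nums]
y = [int(is_prime(n)) for n in nums]
```

Anything a model picks up beyond the cheap cues (last bit zero means even, hence composite) would be the interesting explainability target.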


ML explainability is a whole research area.


Thank you! I never looked into that subject before.


The interesting question is not whether the answer (42) is correct, but how to understand the way the model reaches that solution.


MEMS and additive manufacturing are at photon scale. This is the next step.


I imagine if deep learning is going to lead to molecular assemblers, it will more likely be in the automated design of enzymes and similar structures that could be mass-produced by, say, genetically engineering some microbe.


How long before we get replicators?


Depending on your criteria for success, minus 40 years[0] to plus infinity[1].

[0] 3D printing

[1] the Star Trek ones would need to aggressively cool what they materialise because of the heat released as the bonds form, to an extent that some claim isn't possible, but that isn't my field and I can't comment



