The Road to Self-Reproducing Machines (wsj.com)
44 points by jkuria on Sept 13, 2021 | 62 comments



>> Self-reproducing machines could unleash the power of exponential growth, thus enabling audacious engineering projects.

Grey goo. The Borg. Skynet. I think some audacious engineering might have to be curtailed if we flesh-and-bloods want to maintain some sort of relevance. The key issue may be evolution. We may need to banish unexpected variation from all self-replicating machines. Variation would have to be limited to software. No automaton would be allowed to evolve away from having an off switch.


> Grey goo. The Borg. Skynet. ...

You're reasoning from fictional evidence, like the people who wanted to ban heart and kidney transplants because they'd watched Frankenstein, or who feared computers because they'd watched 2001.

Self-reproducing machines are dangerous in the same way that ammonium perchlorate, natural gas, electricity, or computers are dangerous: people can use them to do bad things, and people can engineer systems carelessly, causing accidents.

https://en.wikipedia.org/wiki/PEPCON_disaster

https://en.wikipedia.org/wiki/New_London_School_explosion

https://en.wikipedia.org/wiki/War_of_the_currents#Safety_con...

https://electricenergyonline.com/energy/magazine/362/article...

https://en.wikipedia.org/wiki/Morris_worm

What is called for is adequate respect for the capabilities and risks of the technology you're working with, prudence in preventing accidents, and avoidance of extreme technological power imbalances like those that led to the Congo Free State and the Google Play Store, not the sort of puerile irrationality that privileges Gene Roddenberry's ideas over those of John von Neumann.


>not the sort of puerile irrationality that privileges Gene Roddenberry's ideas over those of John von Neumann.

Ah yes, the von Neumann that helped develop both the fission bomb and the hydrogen bomb, aggressively promoted their use, and tried to start nuclear war. I know von Neumann is held in high regard for his role in computing, but he is hardly a guy worthy of respect for his views on the safety of technology.


Certainly von Neumann is far from infallible, but his views on the safety of technology were based on reasoning and knowledge, not what would make exciting prime-time TV. We're literally contrasting the inventor of game theory and modern meteorology* with a TV show where sounds travel through outer space and all the extraterrestrials speak English.

This is like discussing whether Stephen Jay Gould or Jimmy Swaggart has a more credible opinion about evolution. I mean, punctuated equilibrium might or might not be correct, but you're being ridiculous.

https://whatever.scalzi.com/2009/10/13/teching-the-tech/ https://archive.is/khTlp

> when you admit that Star Trek has as much to do with plausibly extrapolated science as The A-Team has to do with a realistic look at the lives of military veterans, life gets easier. ... Meta to this is the discussion of why we have to accept that film/tv SF is riding the shortbus — there’s no actual reason it has to be that way — but let’s not get into that right at the moment.

______

* As well as, as you point out, a significant contributor to the development of the fission bomb, the hydrogen bomb, and modern computation. He was also the guy who axiomatized quantum mechanics and the foundations of mathematics, discovered continuous geometry and quantum logic and Hilbert spaces, and solved the compact-groups case of Hilbert's fifth problem. I guess you don't know much about mathematics, so that won't mean much to you; suffice it to say that the "von Neumann machine" was among the least of his achievements. Not bad for a chemical engineer.


I hadn't noticed this before, but an actual Star Trek writer posted a comment to Scalzi's post:

> Raging at Trek because it’s filled with rubber science is a straw man argument. When has any Star Trek show ever pretended to be scientifically accurate? Every Trek show has always been, at the core, an action-adventure drama about contemporary issues reflected off the funhouse mirror of an SF setting. It’s allegory, not extrapolation.

So, don't try to predict the consequences of new technologies by analogies to Star Trek. You'll do as well as predicting the outcome of a war by analogies to Bruce Lee movies. Same goes for Terminator, Dr. Who, and the Jetsons.

And, despite https://tvtropes.org/pmwiki/pmwiki.php/Main/MohsScaleOfScien..., the same really goes for fiction in general: fiction is literally, unashamedly, intentionally, nothing but lies. It doesn't tell you anything about reality, just about its authors' beliefs. And when we're talking about things that haven't happened yet, the authors generally don't know any more than you do; even when they have thought about the subject, as is undoubtedly the case with Star Trek's English-speaking extraterrestrials and audio-transmitting vacuum, they may prefer to portray things they know are impossible because they think they'll be more entertaining or help them achieve some other artistic goal.

Don't reason from fictional evidence. It makes you look like a fool.


I am well aware of von Neumann's works and achievements and mentioned just computing because that is what the HN crowd mostly knows him for. But really, do we want to look for ethical advice from a guy who worked on explosives for the military and then moved on to bring about the most destructive weapons that have ever existed? He then went further and helped make them even more destructive. He, imo, was one of the people directly responsible for the destruction of the islands in the Pacific. His morals, to me, were clearly twisted.

I can still look up to him as an incredibly brilliant mathematician, computer scientist and engineer without conflating that with him being a good and/or wise human being.

I mostly agree about the point about not basing our views on fiction. I just wanted to point out that of all scientists, of which there are many brilliant ones, there are far better choices for sources of ideas on the ethics of technology. Smarts != Wisdom.


> But really, do we want to look for ethical advice from a guy who worked on explosives for the military and then moved on to bring about the most destructive weapons that have ever existed?

We are frequently faced with questions of what we ought to do. Should we take Interstate 80 or Interstate 5? Should we eat a high-carbohydrate diet or a ketogenic diet? Should we pursue dating Barbara or Debora? Should we invest in hydrogen bombs or a larger army?

There are right answers and wrong answers to these questions, as well as answers that are in between. Which answers are right and which answers are wrong depends on two questions: first, it depends on which means will produce which results; and second, it depends on which results would be in accordance with our ends, and which would be destructive to our ends.

So, to decide whether to take Interstate 80 or Interstate 5, we must first decide whether we want to go to Santa Cruz or to Sacramento (assuming we are in San Francisco), and then consult a map. The map will tell us whether taking Interstate 80 will have the result of getting to Santa Cruz or not. Then it is of great importance whether the map is accurate and covers the area in question, and of no importance whatsoever that the mapmaker wanted to go to Sacramento, while we want to go to Santa Cruz.

Von Neumann provided an extremely accurate map of the consequences of the hydrogen bomb to the government of his time, as well as the chances of success of various ways of achieving nuclear fusion. The people that followed his advice were able to achieve historically unprecedented military power, which was their intent. If von Neumann's map had been inaccurate, they would have failed and sunk into irrelevance, like our own Project Huemul here in Argentina, which never succeeded in achieving nuclear fusion. The fact that you yourself do not want to create powerful weapons and achieve military victory is of no relevance whatsoever.

The failure of the peaceful nuclear-energy Project Huemul, particularly in the context of the rather heavy bets we had placed on it, was a significant step in Argentina's decline. It was precipitated in large part by the president emptying the academies of his political opponents, depriving his projects of the accurate and brilliant guidance he needed to make good decisions about questions of science. Perhaps, for all their smarts, they were unwise to oppose the president. (This error was repeated by later military dictatorships, further into Argentina's decline, who expelled and sometimes killed the partisans of the banished former president; the University of Buenos Aires, among others, still keenly feels the loss.)

Of course it is possible for a map to be correct about one such factual question and erroneous or silent about another; no map of Argentina will tell you how to get to Bakersfield, and an unreliable map might mislabel Interstate 80 as Interstate 70 despite correctly labeling Interstate 5. But you have given no reason to suspect that von Neumann was inaccurate about the likely results of self-reproducing machines (and, in case you haven't read him, gray goo and Skynet are not similar to what he predicted); you have merely labeled him "unwise" because, at least on questions of nuclear disarmament, he would have been your political opponent.


My understanding is that von Neumann recognized that science would produce these types of weapons eventually, and so pushed hard to reach an equilibrium state of non-use of nuclear weapons under Mutually Assured Destruction (the dude did found modern game theory). So I think it's a bit disingenuous to say that he tried to start nuclear war.


He advocated for a first-strike attack (bombing Moscow), though this was before the Soviets had a strong nuclear arsenal, so it likely would not have started a war. So yeah, you are right: saying he was pushing for a war is stretching it. He did push for nuking a major city, though.


Here’s one from reality: bacteria. If we create self-replicating machines, even if we do not intentionally introduce variation, they will vary from generation to generation. They will presumably not be indestructible, and so some form of selection will be possible. If their reproduction cycle is fast enough, they could potentially deviate from their intended purpose in a very short time.


Remote inspection and checksum of the device firmware can solve this problem. You should worry about intentional hacking of the firmware, i.e. self-replicating botnet.
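A rough sketch of what such a remote inspection could look like, purely for illustration (the device API here, including dump_firmware(), is hypothetical; it assumes each replicator can be made to hand over its firmware image on request):

    import hashlib

    def sha256_hex(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def inspect_fleet(devices, known_good_digest: str):
        """Poll every replicator and flag any whose firmware no longer
        matches the signed-off reference image."""
        suspect = []
        for device in devices:
            image = device.dump_firmware()  # hypothetical device call
            if sha256_hex(image) != known_good_digest:
                suspect.append(device)
        return suspect

    # Usage sketch: compute the reference digest once from the released
    # firmware file, then audit the fleet periodically.
    # reference = sha256_hex(open("firmware-v1.bin", "rb").read())
    # bad_devices = inspect_fleet(fleet, reference)

Note that a plain hash comparison like this only catches accidental corruption and naive tampering; a deliberately hacked device can lie about its firmware, which is why the intentional-hacking case is the harder one.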


This assumes that the firmware contains no bugs. As soon as it does, there is always the possibility of unexpected behaviour and an unintended evolutionary trajectory.


I used to be concerned about this issue, due to the obvious analogy with biological evolution, but later I realized that it's an easy risk to guard against.

It's a straightforward engineering problem to reduce the possibility of accidental program mutation to any arbitrarily low level, for example using SHA-2 (with which the chance of an undetected error is 1e-77). Of course, you can have "somatic mutations" where one part of a machine or another malfunctions, for example due to damage or errors during construction; but those don't get propagated to the next generation, so they don't produce the kind of progressive deviation you're describing.

This doesn't happen in nature for a variety of reasons, among which is that as such corrective mechanisms become progressively more perfect, the evolution of the species using them becomes progressively slower, and therefore the perfection of the error-correction mechanisms never quite arrives. Moreover, any species whose evolution becomes very slow is at a major disadvantage when the environment changes sufficiently; it will probably die out and leave no descendants after the next climate change, even something minor like an Ice Age, much less a meteor strike.

Another thing to keep in mind is that the number of generations is quite limited in practice. If, to take an unrealistically risky example, you have an alcohol-dependent nanobot replicator weighing 2 picograms, and you set it to reshaping a tonne of alcohol-soaked soil, it can't make more than 5e17 copies of itself, for which it only needs 59 generations. We aren't talking about thousands or millions of generations: even if time allows for them, space doesn't.

1e-77 (.000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 01) is a number that it can be difficult to get a handle on. If we, to take another unrealistically risky example, converted Ceres into 2-picogram nanobot replicators, there would only be 4.7e35 of them, so the chance that one of them would have a mutation in its program that SHA-2 couldn't detect would be 4e-42, assuming that there were just as many erroneous replications as correct ones. (If only one out of every 1000 nanobot firmware installations had a copying error, the chance that one of them went undetected would instead be 4e-45.) The universe is only 4.3e26 nanoseconds old.
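To make the arithmetic above easy to check, here is the same back-of-the-envelope calculation in a few lines of Python (the only outside figure is Ceres' mass, roughly 9.4e23 grams):

    import math

    BOT_MASS_G = 2e-12       # 2-picogram replicator
    TONNE_G = 1e6            # one tonne of alcohol-soaked soil, in grams
    CERES_MASS_G = 9.4e23    # approximate mass of Ceres, in grams

    # Copies needed to consume one tonne, and the doublings to get there.
    copies_per_tonne = TONNE_G / BOT_MASS_G                # 5e17
    generations = math.ceil(math.log2(copies_per_tonne))   # 59

    # Chance that a single corrupted copy slips past a SHA-256 check.
    p_undetected = 2.0 ** -256                             # ~8.6e-78, i.e. ~1e-77

    # Convert all of Ceres into replicators and, pessimistically, treat
    # every one of those replications as erroneous.
    copies_from_ceres = CERES_MASS_G / BOT_MASS_G          # ~4.7e35
    p_any_escape = copies_from_ceres * p_undetected        # ~4e-42

    print(copies_per_tonne, generations, copies_from_ceres, p_any_escape)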

So, human malice or extreme carelessness would be necessary.


In fact, all living things form a category of self replicating machines.

Depending on the environment, your [highly optimized] self-replicator might have some stiff competition.


The idea that a designed machine could be competitive with living things at self-replication (the sole target of a multi-millennia optimisation process) seems… pretty unlikely to me.


It doesn’t seem far-fetched to me; evolution can neither see nor understand the mistakes it makes. There is still plenty of room for optimization.


Grey goo is not so much fiction as the logical consequence of the kind of machine its creator "designed" (in the sci-fi sense of designing, but with a very strong focus on science).

If we ever build unsupervised self-replicating machines, we will get grey-goo-like problems; that is a completely different threat model from any machine we have ever seen. And if the machines are at all fast or at all far from us, logic dictates that none of the normal safety measures we apply elsewhere will apply.

(So, I guess the lesson is "don't build unsupervised self-replicating machines". Even our bodies limit unsupervised self-replication by limiting how many generations a cell line can produce. But the lesson is surely not "people can misuse any tool", "use it wisely", or "control the consequences".)


> Grey goo is not so much fiction as the logical consequence of the kind of machine its creator "designed" (in the sci-fi sense of designing, but with a very strong focus on science).

That's true; I think it's one of the reasons Zyvex's designs didn't include nanobot-sized constructors, instead preferring "convergent assembly" using nanometer-sized parts but not nanometer-sized replicators. Instead, they envisioned that the minimal self-replicating unit would be the size of a laser printer. A person armed with a rock could disable a warehouse full of such devices in a few hours if that became necessary.

So, self-replicating nanotechnology doesn't imply a risk of gray goo.

As a more useful point of reference than space opera and Drexler's thought-provoking speculations, Linux and GCC are capable of replicating themselves, but they don't pose the same risks as the Morris worm.

> that is a completely different threat model from any machine we have ever seen.

Even without the bogus gray-goo threat, that's true of self-replicating machines, but so were my other examples at the time. Well, maybe the PEPCON disaster wasn't so different from the items in https://en.wikipedia.org/wiki/List_of_ammonium_nitrate_disas... and https://en.wikipedia.org/wiki/Largest_artificial_non-nuclear..., in particular https://en.wikipedia.org/wiki/Wanggongchang_Explosion in 01626, which arguably brought about the end of the Ming 18 years later. Nothing like the Wanggongchang explosion had ever happened before, though, and nothing like it would happen again until the 20th century; quite possibly it was the largest human-made explosion before the US started the first nuclear war in 01945, though there are a couple of other candidates.


When facing new technology, fiction is our only guide. We would be fools to disregard the thinking of great minds about the potential, even likely, consequences of new capabilities. Such is the power of modern technology that we must set guidelines on things that do not yet exist, to prevent them from existing. We don't forbid research into genetic weapons because we have experience with them. We ban such things because, in our imagination, we see their potential.


We would be fools to disregard the thinking of great minds like John von Neumann, but even more fools to consider Gene Roddenberry a great mind. For the purposes of prognostication, he wasn't a mind at all; he wasn't even trying to predict what would happen in the future, but to dress up 19th-century Horatio Hornblower naval drama and 20th-century social commentary in whizbang clothing, so people would watch it and he could sell Chevrolets and vitamin pills. See https://news.ycombinator.com/item?id=28519184.


Making an evolving/learning machine stay in sync with its creators' requirements is called the alignment problem. There is a growing body of publications on it. The field has given some smarter-sounding names to the grey-goo and paperclip scenarios.

An interesting thing to note is that we know how to engineer redundancy and code checking into machines to avoid unwanted mutations, as sketched below. Biology makes us think this is hard, but biological systems need to evolve; there is no selection pressure for lower mutation rates as long as they stay at a level that allows the species to survive.
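As a toy illustration of the redundancy part (not any particular real system): store several copies of the control program and majority-vote each byte before executing or copying it, so that a mutation in any single copy gets outvoted rather than propagated.

    from collections import Counter

    def majority_vote(copies: list[bytes]) -> bytes:
        """Byte-wise majority vote across redundant copies of a program.
        A corrupted byte in one copy is repaired as long as the other
        copies still agree; if no majority exists, refuse to replicate."""
        assert len(copies) >= 3 and len(set(map(len, copies))) == 1
        voted = bytearray()
        for column in zip(*copies):
            byte, count = Counter(column).most_common(1)[0]
            if count <= len(copies) // 2:
                raise ValueError("no majority; halting replication")
            voted.append(byte)
        return bytes(voted)

    # e.g. majority_vote([good, good, flipped_copy]) == good

In practice you would combine something like this with the checksum approach mentioned upthread, since voting repairs errors while hashing detects the rare ones that slip through.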


And what if the creators' requirements don't align with the requirements of broader society? We already know that some nation states can behave relatively responsibly with nuclear weapons but we maintain nonproliferation policies with nuclear weapons to prevent rogue nation states and terrorists from wreaking havoc on the world.

The destruction that nuclear weapons can cause is nothing compared to the potential destruction from self-replicating machines, and it is very likely that non-proliferation of self-replicating machines will be impractical, if not outright impossible.

I am fascinated by YouTube channels like Primitive Technology [1], and I am struck by the fact that once self-replicating nanotechnology is developed, it is very likely that a properly motivated and well-read individual could walk into a forest with little more than the clothes on their back and walk out with a self-replicating machine, or rather float out with whatever sort of flying machine their self-replicating nanobots would enable them to build.

Once this technology has been invented there's no stopping it. I don't know what we do then.

[1] https://www.youtube.com/channel/UCAL3JXZSzSm8AlZyD3nQdBA


So first I make a fire drill and start a fire, then dig out some clay and fire it into bricks... *some time later* ... then once the UV laser lithography unit is finished, it's time to assemble the 5nm chip fabrication module for the processing units, and voila! That's the whole thing finished!

I jest, but the thing is just because a machine is capable of self replication, that doesn't mean it's any good at making anything other than itself. Organisms are extremely good at making copies of themselves, but they're ultra-optimised for just that. Even the most sophisticated organisms are only able to make a small set of very crude intentional modifications to their environment, if any. We're the only exception.

Self replicators and omni-replicators are very different things.


Your perspective on self-replicating nanobots, as well as some other comments I have read here, paints a picture of immense change in a short time once the technology is released into the wild. This view ignores the fact that the capability to reproduce and to adapt are not the only factors. Any green or grey goo has to cope with the limitations of its surroundings. Here are some examples:

* available energy sources (food, sunlight, wind, geothermal energy, ...)
* the rate of conversion into the form of energy actually consumed (electricity, or sugars/fats, etc.)
* processing power (along with its energy efficiency)
* energy consumption/efficiency when interacting with the environment
* speed of reproduction
* speed of adaptation (through intelligence or evolutionary techniques)

I don't wish to downplay the risks involved, but even if someone creates an intelligence that can outsmart them, or a replicating system they fear may over-replicate, the creator should control enough of the environmental factors to keep it contained long enough to cut off the creation's access to energy and resources. The real risk (from a perspective of self-preservation) is not limiting the creation enough, or overlooking sources of energy, information, or whatever else needs to be limited for this system.


I don't fear a run-away grey-goo scenario where the self-replicators outsmart their creators, I fear that this kind of technology could be used as an unprecedented force multiplier by nefarious individuals or small groups of people.


Nuclear proliferation is like our training wheels for all the dangerous techs that are to come. If you are worried about it, look into how world governance works, how international norms are agreed on, and take a partisan stance against the parties that block the ability to do such things.

Nuclear proliferation: so far so good.

Ozone layer destruction: disaster averted.

CO2 limitations: we would have solved that by now if GWB hadn't suddenly decided that disbelieving in climate change was a respectable policy position.

> The destruction that nuclear weapons can cause is nothing compared to the potential destruction from self-replicating machines

I think you overestimate the speed at which this will come. We probably could, today, make a self-replicating machine from raw materials, but it would be the size of a factory and would have a replication cycle of a few months. Going from self-replication to grey goo will take some time; hopefully we can prepare for it.


And at every step of the short way, people smugly displaying their cool: 'Oh, primitive luddites and their unfounded fear of the wonders of tech. We've been living in Fukushima for 100 years, just build the damn nuclear plant on the seashore already. What could possibly go wrong? A tsunami? That's medieval superstition.'.

Took all of 25 years for a battery powered tablet to be more powerful than a room-sized supercomputer, MFLOP for MFLOP.

https://www.theregister.com/2012/03/08/supercomputing_vs_hom...


Unless you’re suggesting the shirt is itself made of replicating units, I think you’re underestimating the gap between what Primitive Technology et al. can do and the foundation needed to make even a simple microchip.

And given life is self-replicating nanotechnology, I also think you’re overestimating its limitations, in particular speed.


Just spotted, too late to edit, it should read “overestimating its abilities, in particular speed”.


Yeah personally I feel that it's humans who are the unstable component, unable to keep to policies for millions of years, not the machines. Our weapons have immensely increased in power in the 20th century, and improvements are being researched constantly. The number of countries with nuclear capabilities is ever increasing. How do you prevent nuclear armageddon in such a world? "Don't nuke all of humanity" is a policy that most humans would agree with, but you only need a few who don't. If we want to be a civilization that survives for millions of years, we have to give up control to machines that can uphold such policies for such time spans.


Which is a known bug: over-idealizing the unknown. Machines will uphold their programming, but they are inflexible and will react wrongly (e.g., shooting down launching rockets) in rare situations (a large asteroid heading for Earth).


Wait, do you argue that the later a species procreates, and the longer it takes, the more likely it is for longevity and cancer suppression to co-evolve upwards?


Not just an off switch, a kill switch. Anything that can reproduce only needs a software modification to change the structure of its hardware on the next generation. If a self reproducing machine's fitness is measured only by real world conditions, and if its software is free to mutate, it will become the dominant microscopic life form very quickly. At that point maybe only an EMP would wipe it out, if it were possible to wipe it out at all. If not, so long carbon based life.


Why not just keep it in sealed hardware that can’t affect the world?

https://www.yudkowsky.net/singularity/aibox


There are ways to keep an AI in sealed hardware and making sure it can't affect the world, for instance by using an objective function that only deals with mathematics, and doesn't deal with the real world at all.

E.g. the AI is given a fixed amount of hardware and told to produce an algorithm that solves some NP-complete problem (say integer programming) in expected time as close to polytime as possible, as well as a mathematical proof that the algorithm satisfies the claimed close-to-polytime complexity bound. Then humanity can just solve the NP-complete problems separately once they have the algorithm.

This objective function doesn't care about the physical world -- it doesn't even know that a physical world exists -- and so it's about as likely to directly affect the physical world as MCTS or AlphaGo.
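A toy sketch of what such a math-only objective could look like (everything here is illustrative: it uses subset-sum rather than the integer programming named above for brevity, it omits the complexity-proof requirement, and `candidate_solver` stands in for whatever program the AI proposes):

    import random
    import time
    from collections import Counter

    def random_instance(n=20, seed=None):
        """Random subset-sum instance that is guaranteed to be solvable."""
        rng = random.Random(seed)
        nums = [rng.randint(1, 10**6) for _ in range(n)]
        target = sum(rng.sample(nums, rng.randint(1, n)))
        return nums, target

    def score(candidate_solver, trials=100):
        """Objective defined entirely over mathematical objects: reward
        correct answers, penalize running time.  No sensors, no actuators,
        no notion that a physical world exists."""
        total = 0.0
        for i in range(trials):
            nums, target = random_instance(seed=i)
            start = time.perf_counter()
            subset = candidate_solver(nums, target)
            elapsed = time.perf_counter() - start
            valid = (subset is not None
                     and sum(subset) == target
                     and not (Counter(subset) - Counter(nums)))
            total += (1.0 if valid else 0.0) - 0.01 * elapsed
        return total / trials

The point is only that the scoring function ranges over instances, subsets, and runtimes, never over anything outside the sandbox.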

The "AI is going to run out of control" is a very compelling narrative (as everybody who has read the Sorcerer's Apprentice understands). But that doesn't make it true. Beware the availability heuristic.

(Incidentally, I think AI destroying mankind because it's too smart is an unlikely outcome. It's much easier for the AI to subvert the human-designed sensors linked to its objective function; and if the AI is sufficiently smart and the sensors aren't perfect, then it can always do so.)


These counterarguments are only possibly effective because you're imagining some particular kind of AI. When there is a useful AI, of course we will want it to be able to interact with people and have it control physical things in the real world. Just like existing computers do.


Given the state of the human race right now I think I'd rather give the Borg a fair shot. I'll be the first to volunteer for assimilation.


We are the dyslexic of Gorb. Fusistance is retile. Your ass will be laminated.


There are biosafety level laboratories, so why not similar labs for self-reproducing machines? But one mistake is not going to go well...

Another science fiction example: the Replicators.


With DIY biohacking we're already seeing the trickle down of the kind of work that used to require University or Government level of funding starting to take place in garages and makerspaces. It won't be long before we see the same effect with self-replicating machines.


Once you get to the last stage of this paperclip maximizer [0] game your paperclip swarm is slowly being eaten by a competing swarm.

Rabbits also reproduce endlessly until predators start hunting them or they starve because of the shortage of food.

[0] https://www.decisionproblem.com/paperclips/index2.html


Yes, but then the universe is nothing but a) paperclips, and b) some residual matter not yet turned into paperclips.


The Faro plague is the closest one to us and the most accurate from a scientific point of view.


Where is modern-day materials science with regard to self-replicating nanomachines? I know we aren’t there yet, but every year is closer than the one before. I have no insight into the state of the art, however.

Downstream of that question, has there been any serious study into the likelihood of a “gray goo” scenario? Or is it too far over the horizon?


I think this is still science fiction. About ten years ago a lot of the focus was on DNA-centric self-replicating structures but it didn’t go anywhere interesting. More recent attempts (example: https://news.wsu.edu/2018/03/08/self-replicating-materials-2...) are still very much early stage, focusing on exploratory fundamental research.


If you don't understand the reference to grey goo, this Wired article from Bill Joy (ex-Sun Microsystems) is the first place I heard it.

https://www.wired.com/2000/04/joy-2/


Good article and references some excellent books.


Hm this article is very light on details. I remember hearing about the RepRap ~15 years ago:

https://reprap.org/wiki/RepRap

It's an open source project to make a 3D printer that prints itself. I thought it was a very cool idea, but I haven't heard anything about it in 10 years.

The home page has a video from 2010, which makes me think the project is dormant. I'd be interested in updates from people who have followed more closely.

It looks like the article didn't mention any recent progress on ANY self-replicating machine project? It does feel like people get overexcited about downsides before anything actually works.


Hum... You linked to the page of the original RepRap. Yeah, it was never too popular and everybody stopped using it long ago. People have been replacing it since it was published.

Last time I looked, the most popular RepRaps were variants of the Prusa i3 design (that one being more than 10 years old too). But there was such a large explosion of designs some 5 or 6 years ago that it may be in the minority now, even if it's still the most popular single design.

Anyway, that wiki is full of great information that is perfectly applicable today. Just because the one design that gets the name of the project was improved into an unrecognizable format, it doesn't mean the idea died.


OK but what's the best resource on what has happened / what progress has been made in the last 10-15 years? How are they better and what are the blocking issues?


Oh, I've reread your comment, and you want progress on self-replicating.

RepRap won't lead to a full self-replicating machine. That's by design. The focus on self-replication is there because the more of itself a machine is able to build, the cheaper its parts will be. A 3D printer nowadays can print almost every non-commodity part of itself that isn't electronics and doesn't run very hot. There's little progress left to make on that dimension. It has successfully driven down the price of printers.

The progress made over the last decade has been in things like ease of printing, printing speed, materials handling, printer size, print volume, and print quality.


That's from when 3D printing had the same sort of, er, enthusiasm that Blockchain (and isn't it interesting that my phone decided that needed to be capitalized) has had more recently. Proponents insisting that it was going to fundamentally change the world and all that.


More recently? Bitcoin's initial release was 12 years ago, which isn't too far off from when 3D printing first started hitting the scene. 3D printing has changed the world, but the need for an expensive printer to get access (vs. Bitcoin's accessibility as pure software; you don't need to buy mining hardware to get into Bitcoin) means its reach has been limited to the corners of the world that have them.


The title brought back memories of two quite uncomfortable dystopian short stories by Philip K. Dick: Second Variety and Autofac.


Where are these self-reproducing machines getting their materials? What substrate is so available that they couldn’t be contained? If they could thrive on a simple, commonplace chemical structure, why hasn’t life already done it? What energy will sustain them?


Solar can take care of the energy easily. For the materials, I would expect a prebuilt facility to autonomously mine things to provide raw materials. Easier said than done of course. Without true AI, it wouldn't be possible to make them 100% autonomous because eventually they would run out of materials.


I would think it's probably possible for there to be effective organism designs that nature has never come up with, just because there's no continuous path of gradual improvements that leads there from single-celled life.


I disagree. We are dependent on plants for food and oxygen. Specialization is key. If we build self-reproducing machines, they will consist of countless sub-species, each specializing in one task. Most of them will specialize in the extraction of different resources.


This is a much bigger risk than an AI. Evolution will be at work there, and evolution will beat any kind of measure we try to set in place to control the output.


Maybe we should not.


I wish this was not behind a paywall!



