Star Trek depicted one of the scarier applications of such a technology: there, the Jem'Hadar are a genetically engineered race of warriors controlled by the Dominion, who exert power over them by being the sole provider of a substance (ketracel-white) that they need in order to survive:
If my reading of the paper is correct, the unnatural XY base pair doesn't have any function per se, but without it the organism fails to replicate its DNA and quickly dies. Hence you can argue that after introducing it, the XY base pair has a vital function, as without it the organism can't survive anymore. And since the organism has no way to synthesize X & Y itself, they need to be supplied from the outside for its whole lifetime. This, in turn, provides a neat way to build "self-terminating" organisms that depend on a steady supply of a given substance. One could of course argue that vitamins work exactly the same way, but unlike those, the artificial base pairs would not be found in the wild, so supply could be controlled more easily.
It's good as long as self-termination is a nice-to-have feature. If you REALLY want them to self-terminate but you found out that they mutated and can now synthesize X & Y, you're in trouble.
Genetically modified organisms are a bit scary, but genetically modified organisms that replicate autonomously are more than a bit scary. As programmers - a profession that managed to spread Shellshock, the dumbest imaginable backdoor, widely enough to make much of the human population vulnerable - we should understand this better than others. We have no idea what happens in our own code, code that we copy from place to place ourselves, written in languages we designed and running on machines we designed, and there's nothing like mutation and self-replication with this stuff, for the most part. I kinda doubt that biologists modifying "code" which is actually pretty large molecules existing and interacting in a 3D space with a lot of forces pulling and pushing things, and without the benefit of comments or design documents, can make something that can be trusted to autonomously replicate and gradually mutate.
We have no idea what happens in our own code, code that we copy from place to place ourselves, written in languages we designed and running on machines we designed, and there's nothing like mutation and self-replication with this stuff, for the most part.
Documented: computer viruses have mutated in the wild when two different viruses accidentally copied themselves into overlapping regions of memory.
Interesting. When has that happened? It seems awfully unlikely to create "viable offspring" that way, but I suppose most biological mutations are neutral or negative as well...
> If you REALLY want them to self-terminate but you found out that they mutated and can now synthesize X & Y, you're in trouble.
There's a good hook for a sci-fi story. Some low-level manager in a biochem company starts siphoning off XY juice to sell on the black market, and doesn't realise that by slowly ramping up their thievery they're evolving the killswitch out of their engineered organisms...
> It's good as long as self-termination is a nice-to-have feature. If you REALLY want them to self-terminate but you found out that they mutated and can now synthesize X & Y, you're in trouble.
That's the premise of Jurassic Park 1, except with an amino acid.
> It's good as long as self-termination is a nice-to-have feature. If you REALLY want them to self-terminate but you found out that they mutated and can now synthesize X & Y, you're in trouble.
Also, if they get powerful enough that they can take over your synthesis facility...
That's an overstatement. Sure, code can be very complex, but we actually can see exactly what's going on whenever we really want to. It's just that we cannot see what's going on with all the code all the time.
I think that the argument is that I could know, pretty exactly, not only when (or if) this piece of code terminates, but also how many iterations it's going to take. Sure, it's much easier (in this case) to simply run the thing, experiment with some tweaks, and come to some conclusion (like biologists supposedly do?). But I could also go read the CPython (assuming CPython) implementation of floating point arithmetic, eventually dropping down to assembly and the workings of the FPU, while at the same time I could also take an analytic approach, treating this as a well-defined mathematical problem of summing a series (I think? sorry, I'm personally not that good on that front... but the option is there!). I think biologists are constrained to only the first, experimental approach - or that's how I understand the argument, at least.
What you are saying sounds intuitively true, but the mathematics of Chaos Theory showed that it is not actually the case.
My code snippet runs a sample of a particular chaotic function that was originally inspired by biology (a simple predator/prey model). Ultimately, you just cannot predict how the function will behave - it is chaotic.
Ultimately most complex systems start to show some chaotic behaviour, which basically means that the behaviour of the system cannot be predicted in detail, even if virtually everything is known about the system in advance.
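The snippet under discussion isn't reproduced in this thread, but the behaviour being described can be sketched with the logistic map, the textbook chaotic system mentioned downthread (function names and parameters here are illustrative, not the original code):

```python
# Illustrative sketch (not the original snippet): the logistic map
# x -> r*x*(1-x), a simple population model that is fully chaotic at r = 4.0.
# Two starting points differing in the 10th decimal place decorrelate
# completely within a few dozen iterations.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # one-part-in-ten-billion perturbation
divergence = max(abs(x - y) for x, y in zip(a[-10:], b[-10:]))
```

Knowing the code exactly doesn't help here: predicting the 50th value to any fixed precision requires ever more digits of input accuracy, which is precisely the "known system, unpredictable detail" point.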
No, "virtually" is the important bit. In code we can know all the details, so you can predict the code just fine - for example, by running it twice and getting the same output both times. Chaos theory is based on real-world systems that can't be measured accurately; internal computer simulations don't have that limitation.
One example: if you try saving a simulation to disk, you need to copy all of its internal state, or you get a different output on resume.
You're right (and I think commenters saying that parent was talking about halting problem are wrong - it was clearly about chaotic systems, which are deterministic). And yet, this doesn't make the original statement - "we have no idea what happens in our own code" - any less true. Without even touching the halting problem, all code we write quickly explodes in complexity beyond our ability to mentally simulate it, and those of us who don't work on life critical systems are not paid for anything more than some unit tests and a cursory "uh, it looks ok to me" assessment.
No, the parent is actually talking about the Halting Problem. It has been proven impossible to predict when a program will exit without actually executing the program. (except in a few trivial edge cases)
Minor detail, but the undecidability of the Halting Problem strictly requires an infinite tape (infinite storage space).
In "practice", if you could call it such, a computer with limited memory becomes a "linear bounded automaton" and the Halting Problem is decidable; cf. http://cs.stackexchange.com/q/22925
Of course, big enough memory can mean that it can be impossible to detect termination before the heat death of the universe - we are talking theory here.
Actually, the proof of the halting problem is rather esoteric in its demonstration. The claim isn't that you can't predict when most programs will end, but that it's possible to construct a program that goes out of its way to be unpredictable, by running the prediction algorithm on itself and inverting the answer using an infinite loop.
No, I don't think you quite get it. What OP meant, in more rigorous terms, is: write me a function that takes "b" as an argument and returns whether the function will terminate or not when I replace 3.59 with "b". As it turns out, this is not at all simple. But you don't even need to go there. For some values of "b", the program doesn't terminate, and it's not trivial to show it. What are you going to do? Run the program forever? You can never prove it won't terminate in the next iteration.
The machine has finite precision, so as soon as it repeats a previous state it's in an infinite loop. And no, you don't need to run this on an arbitrarily large computer: two copies running at different speeds, where you compare state after each step, also work.
PS: The halting problem is only undecidable when running with unlimited memory.
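The "two copies at different speeds" idea is exactly Floyd's cycle-detection algorithm; a sketch with an illustrative toy state machine:

```python
# Floyd's tortoise-and-hare: run two copies of the same deterministic machine,
# one at single speed and one at double speed, comparing states. With finitely
# many states, a non-halting run must revisit a state, and the fast copy is
# then guaranteed to catch the slow one on the loop.
def find_repeated_state(step, start):
    slow = step(start)
    fast = step(step(start))
    while slow != fast:
        slow = step(slow)
        fast = step(step(fast))
    return slow  # a state that provably lies on the loop

# Illustrative "machine" with at most 251 states: x -> (x*x + 1) % 251.
looped_state = find_repeated_state(lambda x: (x * x + 1) % 251, 0)
```

Note this only needs O(1) extra memory, which is why the "just compare states" answer is practical even when storing the full history is not.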
I don't think you quite grasp how hard a problem can be for a fixed-size Turing machine. The state space has size 2^(memory bits); naively solving the halting problem for all cases that fit in a reasonably sized machine still makes the age of the universe look like nothing. 2^10000000000 is nothing to scoff at.
For the Collatz problem you can simply run a (fixed) version of that program with arbitrary-precision arithmetic, and it would pretty much run until you run out of memory, which takes on the order of 2^RAM steps. For me that would be about 2^8000000000.
I think an important argument that hasn't been made is about simulation speed. If we want to make sure a program running at clock C never misbehaves, it suffices to simulate it at a clock C' > C and halt it if and when it misbehaves. Any constant-factor speedup is sufficient to manage that, even on the same hardware. A problem starts to occur if the program is susceptible to random mutations, with a branching factor B every T seconds. Then simulating t seconds into the future requires a factor of B^(t/T) more computing power. If there's one 1-bit mutation per second, it gets unpredictable less than an hour into the future.
def search():                 # wrapped in a function so the "return" below is valid Python
    x = 5
    y = 5
    while True:
        if y % 2 == 0:
            y = y // 2        # integer division keeps y an int
        else:
            y = 3 * y + 1     # was "3y+1", which isn't valid Python
        if y == 1:            # this x converged to 1: move on to the next one
            x = x + 1
            y = x
        elif y == x:          # y came back to its start: a nontrivial cycle
            return x
Please predict the code. I'll even give you $500, in memory of some guy named Erdős, if you get it right (and I haven't even gone for a provably undecidable example! :) )
PS: Don't waste time checking x values less than 1000000000000000000
In theory this will eventually overflow even with Python's arbitrary precision, due to the machine's finite memory. In practice, though, it's going to run until the machine encounters an error.
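In practice any concrete check of a program like this needs an explicit step budget; a run that exhausts the budget is "undecided", not "proved non-halting". A sketch:

```python
# Checking Collatz convergence with a step budget: the only honest answers
# are "reached 1" and "gave up", never "runs forever".
def collatz_reaches_one(n, max_steps=100_000):
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return None  # budget exhausted: undecided, not disproved

# Every starting value up to 10,000 is known to converge well within budget.
all_converge = all(collatz_reaches_one(n) for n in range(1, 10_001))
```

This is the practical shape of the halting problem: you can always confirm termination after the fact, but a still-running program tells you nothing.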
If you're asking about the underlying math problem: yes, that is conjectured to hold for all positive integers. It's much more obvious in base 3.
And this is a great illustration of why arguments that "humans are special because computers can never solve the halting problem" are specious. We can't solve the halting problem (in the general case) either.
That said, we can still see what that code does, even if we don't know precisely what code path it takes. Great example though!
OK, well we can always say that the heat death of the universe renders all problems irrelevant, but it's a bit of a cheat in my opinion.
The original poster seems to imply that knowing the code means that you can know the behaviour of the system; I do not think that is the case, and my simple (chaotic) example tries to demonstrate this.
Well indeed; as per Turing on the Entscheidungsproblem, one cannot in general know what some abstract bit of code does without running it, but one can sometimes put limits on its behaviour - your snippet will never do anything but print out numbers, for example, or something running under seccomp might be prevented from making certain syscalls.
If you have the compiled assembly, obviously you can say exactly what will happen. I agree with your meaning somewhat: more often than not, people have no clue, especially with large systems. Still, it's not as bad as your statement makes out.
Well... the compiled assembly doesn't give you any more information than the code snippet.
The issue is not so much how this code is translated from a higher abstraction level to a lower one... the issue is that this code represents a simple chaotic function (the logistic map). As such, for a simple few lines of code the behaviour is very complex and virtually impossible to predict; for certain values of the controlling number it will (1) halt relatively quickly, (2) never halt, or (3) halt after a very long time... but good luck distinguishing between cases 2 and 3!
Then there is a subject called 'genetic programming', where programs are explicitly designed to mutate, reproduce, cross-pollinate, and randomly generate new code from a set of pre-defined operators. Which, when performed in a large scale, might be 'scary'.
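The genetic-programming loop described above can be sketched in a few lines. This is a toy: the "programs" are just (a, b) coefficient pairs standing in for expressions a*x + b, and the target function, population sizes, and mutation range are all illustrative choices (real GP mutates syntax trees, but the loop structure is the same):

```python
import random

random.seed(42)
target = lambda x: 3 * x + 7  # the behaviour we want to evolve toward

def fitness(ind):
    a, b = ind
    return -sum(abs((a * x + b) - target(x)) for x in range(10))

def mutate(ind):
    a, b = ind
    return (a + random.uniform(-1, 1), b + random.uniform(-1, 1))

def crossover(p1, p2):
    return (p1[0], p2[1])  # child takes one "gene" from each parent

population = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(30)]
for _generation in range(300):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # elitist selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
```

The "scary" part the comment alludes to is that nothing in this loop requires the programmer to understand *why* the surviving candidates work, only that they score well.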
Because embedding new species into an untested environment can be devastating. My example is the accidental import of tree snakes to Guam, which resulted in the near-complete extinction of the island's native birds, because the snakes ate the eggs faster than the birds could adapt. There are numerous other examples of accidentally released animals destroying ecosystems.
That you know of. Have you vetted these GMOs against all possibilities of people's DNA? Perhaps unreasonable. How about testing against people who are considered "unhealthy" by the trial criteria? Nope, science knows nothing about how GMOs affect those people, yet claims they are safe.
I think there should be a ban on "owning" custom sequences of DNA, as it will eventually lead to an accumulation of most of our "biological capital" in the hands of very few companies, which could then effectively control what and how we eat (and many other things as well). Already now most biodiversity has vanished from agriculture, with only a handful of different crop species accounting for more than 99 % of all harvests. This is highly dangerous economically, politically and environmentally.
The biggest problem is they're not open source. What went into making that organism? Is there a standard for documenting this and reasonable public disclosure when relevant?
As the reverse-engineering communities will be glad to tell you, software basically is open source. It must eventually be decoded into instructions to run on a processor, and while it's not impossible to fully sign and encrypt binary blobs before they are run, in practice this isn't done for most applications. Without specialized hardware you can at best obfuscate your code; you cannot make it impossible to eventually disassemble and understand.
Assume your code will be disassembled at some point by a bad actor. Don't assume that any code distributed to a client is safe or trustworthy, and expect that any data, methods, or secrets contained in that code are now public knowledge. Securing your systems under any other assumption is building on a dangerous falsehood.
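The point can be shown in miniature with Python's own bytecode: the standard-library dis module decodes a compiled function back into instructions, secrets included (native binaries yield to objdump or Ghidra in the same way; the function and password here are of course made up):

```python
import dis

# Any "hidden" check must still compile down to inspectable instructions.
def secret_check(password):
    return password == "hunter2"

instructions = list(dis.Bytecode(secret_check))
constants = [i.argval for i in instructions if i.opname == "LOAD_CONST"]
# The comparison constant - the "secret" - sits in plain sight:
print(constants)
```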
GMOs are not inherently safe, they're just not inherently unsafe. It's possible to create something devastating to the environment, but it's also possible to create wonderful things. The trick is being careful to create the right things (which, given how companies work, generally means regulation).
I fully agree, which is why I added the /s(arcasm) tag at the end. Not sure if that is commonly understood here on HN. I was just saying that frequently comments that imply even the slightest criticism about GMOs get attacked as anti-science, which is absurd.
I hope, though, that you understand that this is basically a reaction to GMOs being assumed inherently dangerous and evil by the general population, by the virtue of not being "natural" enough.
I don't. I've given what I've thought to be well balanced and scientifically backed statements about GMOs being safe because of all the testing, and have even used the same terminology that you have, saying they are neither inherently safe nor unsafe, and have been viciously attacked by pro-GMO supporters for not blindly saying they are inherently safe. And in almost all cases the background of the posters that attacked me either worked in the industry or a mod of /r/GMOMyths/ or the like. The funny thing is that anywhere I post, they come out of the woodwork, as if they have bots that are constantly scanning various forums for keywords.
It's not merely an attack of people with stupid uninformed opinions. It's a tobacco industry style propaganda campaign that attacks anyone that is not 100% patriotically behind GMOs.
>If my reading of the paper is correct, the unnatural XY base pair doesn't have any function per se, but without it the organism fails to replicate its DNA and quickly dies.
That's actually different from how I've read it - it appears to me that the XY pairing is stable and not harmful to the host, and so is not degraded or removed (recognised as foreign by mis-sense DNA proofreaders), but is otherwise not performing any function, although some of the technical details are missing.
As to your second point, we already have highly perfected 'terminator' technology, currently in 3 forms:
1) the infamous Monsanto terminator technology, where second-generation seeds would not be viable (which, incidentally, would be a significant benefit to some farmers growing genetically engineered cotton, and potentially to their crops, as spilt seed from harvest would not germinate)
2) in bacteria and other organisms, in the form that you describe above - ie lack of a substance is fatal; usually this substance is some generic low-dose antibiotic that triggers a genetic switch when present
3) as you suggest, removing an organism's ability to synthesise a particular amino acid and having it dependent on food
Yes, very good point. I didn't want to mention Monsanto, but I also think they're probably the company most likely to use such a technology on animals in the future, if they get the chance to do so.
One of the proposals for prolonging human life by eliminating cancer is to engineer humans who have no stem cells and no mechanisms for repairing telomeres. Instead, such engineered humans would have to undergo massive stem cell and telomere repair therapy once every few years or so.
Such a mechanism would also serve as an explanation for a Blade Runner-style replicant with an extendable lifespan.
I remember that gel! It turns out, though, that you would never be able to make that gel: the distance those bands sit from the top depends on your PCR being able to incorporate the six base pairs, so the idea of banding with gaps doesn't make sense - instead you would get a traffic jam at the spot where there was an alien base pair.
I think the goal is to give it a function. But in order to have it replicate in a cell, you need to have a way of selecting for it. Biosafety through selection is an easy "add-on" because you already had it by virtue of the fact that you needed it to get the real result.
> "The other reason we don't need to be freaking out, says Romesberg, is that these molecules have not been designed to work at all in complex organisms, and seeing as they're like nothing found in nature, there's little chance that this could get wildly out of hand."
This is how the plot of every horror/disaster movie starts.
It's entirely possible that there were other self-replicating molecules on Earth that evolved first, but DNA simply out-competed them to extinction.
If we ever produced a competitor to DNA that was more 'virulent' than DNA, it would similarly out-compete DNA.
It would be the most pandemic of plagues, affecting every form of life. Even if we could somehow protect our own DNA against attack, it would attack all the bacteria that are symbiotic with us and that we need to survive.
At the risk of oversimplifying, because "even the very wise cannot see all ends."
Unknown unknowns. You can know your specific problem domain inside and out, and still have other things show up that rapidly invalidate your assumptions about the set of possible outcomes.
Definitely interesting work, and some very clever solutions to difficult problems, but to be clear, they only had a single unnatural base pair, and it was only in a plasmid.
Addressing the other comments about this leading to some crazy scifi scenario, imagine telling someone you used machine learning to identify spam emails and they started freaking out about how you're going to start Skynet. If you understand how things really work you can see it's not just implausible, it's that things fundamentally don't work that way.
Then again, nobody knows how Skynet will get started. If it does happen, you can be sure it'll be done by someone who is sure that things just don't work that way, right up to the moment they are crushed by a giant metal boot.
That's true, but my argument is that we can basically guarantee that a simple neural net won't become self-aware, in the same way that we know that a linear regression library won't become self-aware. The complexity is bounded, just like it is in the experiments described in this paper.
Software systems are relatively predictable, and built on mathematical models and rules that we designed. Even so, there are still many negative unforeseen consequences of their operation.
Biological systems are relatively unpredictable, and based on rules that we didn't design and cannot begin to claim we fully understand.
Your analogy therefore strongly predicts negative unforeseen consequences.
Difficult to see how this comment, as common as it is, is anything but unscientific. Biological systems are often highly predictable in many circumstances - in fact "biological predictability" is what accounts for the general success of all modern medicine and agriculture.
If the field "cannot begin to claim we fully understand" things now, when do we reach the point where we suddenly "know enough to begin to do things", and why should we be listening to you about that point instead of experts in the field?
The reality is that this system is highly predictable in the way it was intended, and the changes made are understood extremely well at a molecular and cellular level. You can't come up with a specific concern with this system because you don't have a clue about it. Anyone who does have a clue cannot come up with a specific concern, because the system is extremely predictable.
This anti-scientific Taleb-esque argument is just used by people who have no idea what they're talking about to try and speak over those who do, and is anti-intellectual and anti-scientific to its core. The OP's comment holds very well - this is no different from a layperson worrying about someone implementing an SVM on some dataset "becoming Skynet, because computers aren't always predictable!" - it's nonsensical the minute you have any idea what you're talking about.
To suggest that accurately gauging the limits of our current knowledge about biological systems is 'anti-scientific' or 'anti-intellectual' goes against everything science stands for, and firmly crosses the line into blind scientism.
It remains absolutely true that we understand the software systems we build far better than we understand the biological systems that we are intervening in.
There is also a straw man in the suggestion that I have said we shouldn't 'do things' until we know more. I never said that.
I simply think that pretending that our understanding of software systems is representative of our understanding of biological systems is a delusion.
Again, the original analogy is perfect because the same statement could be made by the hypothetical "anti-software" person - this person could say "You're just appealing to the authority of these computer scientists! They don't know everything, software bugs happen all of the time!"
>To suggest that accurately gauging the limits of our current knowledge about biological systems is 'anti-scientific' or 'anti-intellectual' goes against everything science stands for, and firmly crosses the line into blind scientism.
Yes, of course it does. My point is that you are not in a position to accurately gauge our current knowledge about biological systems, and that everyone who is in that position does not agree with you.
There are plenty of legitimate dangers and ethical issues in regards to biotech development, even in the near future (just like there are legitimate issues in regards to generalized AI) - but essentially nobody with any field knowledge thinks that on-going research in this category (along with GMO research and synthetic biology) will "accidentally cause a catastrophe", in the same way that while many computer scientists may push for caution with generalized AI, they don't think current ML methods and software will accidentally cause a catastrophe.
>It remains absolutely true that we understand the software systems we build far better than we understand the biological systems that we are intervening in.
I did not suggest that it does not. That is obviously true. However, the original analogy still holds: it really is roughly accurate to claim that the chance of catastrophe occurring from this research should be treated similarly to the chance of catastrophe occurring from ML implementations. While it is impossible to quantify the true chance of catastrophe in either case, both are considered so ludicrously low, so unimaginable, and so counter to our understanding of the system that experts in the field do not consider them a serious threat.
When I say it is "unimaginable", it is because it really is - I cannot imagine a scenario where this technology results in some kind of catastrophe. It is actually easier for me to imagine a possibility in the software case, such as a predictive neural network involved in controlling the power grid that gets some unexpected input and creates some unexpected output that causes serious grid malfunctions for millions of people. While that may not be 100% "how things work", it at least seems more plausible than this research resulting in catastrophe.
>There is also a straw man in the suggestion that I have said we shouldn't 'do things' until we know more. I never said that.
You may not have, but many people (including here on HN) use similar arguments in order to advocate for suppressing research and progress in biotechnology (with Nassim Taleb being the most public of these people). If you are pro-biotechnology, then great!
I think you are mistaking me for someone who is anti-biotech, and your response is more about that than about what I actually wrote.
I'm not anti-biotech. I'm against using the spurious analogy with software as a way to establish risk levels. I do think that it's an anti-scientific argument, and frankly one that undermines your cause.
If, as you say, only experts in biotech are in a position to understand that the risk level is similar to the software case, this proves my point, because it means that the analogy is just a way of conveying the opinions of those experts without being open about it.
There's absolutely nothing unscientific about saying biology is hard to understand and predict relative to software. We understand some biology, but the complexity of things like the genome, brain function, bacteria, etc. is overwhelming. (Relevant xkcd: https://xkcd.com/1605/).
Far from being antiscientific, it's a rather straightforward observation that borders on self evident. What would be unscientific would be claiming such things can never be understood, even in principle. But that's not the claim that was made.
Yes, it is a self-evident observation that, as a whole, we understand the software systems we've built better than the entirety of all biological systems. Obviously nobody is disputing that.
The original claim was that being against, or cautious about, this research "because we can't always predict what biological systems will do" is roughly analogous to being against or cautious about people implementing machine learning algorithms because "we can't always predict what software will do, especially AI!". I believe that this comparison is 100% valid and useful for explaining why the original reasoning is flawed to those in computer science who are fairly uninformed about biology.
The actual "chances of something going wrong" might not be the same - it might be a 10^-25 chance that you made a mistake in your implemented SVM that, err, causes a catastrophic error with serious consequences for humanity, possibly because "the internet is all connected, man" (if this sounds ludicrous to you, that is how the other side sounds to me in these cases of biology!), and a 10^-24 chance that this research results in some organism that, err, causes a catastrophic error with serious consequences for humanity (it is equally difficult for me to envision this) - but the point is that both are cases where we can be confident that no such absurd catastrophe will occur, because we have studied the systems involved.
If you have specific predictions about how on Earth this research could result in catastrophe, make them and we can discuss how possible they are. If not, it really is anti-scientific fear-mongering: you aren't making any positive claim about a legitimate possible issue, you are simply saying "I have this gut feeling that like something could go wrong with that and it could like spread and be bad", even though we have excellent reasons to assume this is not the case and no reason to assume it is, and that everyone with any field knowledge strongly disagrees with your personal assessment, and the situation really is analogous to the person worrying about the SVM.
Edit: And to further explain why this is "anti-science" - it is because you are not making any testable claims; you are simply saying "We don't know everything that can go wrong, there could be something you haven't thought of!" and using that impossible-to-disprove hypothesis to shut down research and development, i.e. the process of science itself.
If nothing else, at the fundamental level there aren't any ribosomes that would parse the new base pairs. Engineering a new ribosome to produce something useful given an "alien" base pair would be the trigger that could make the new base pair potentially harmful. Until that point, it's completely inert.
Can the organism replicate the XY pairs during DNA replication (and pass it on to divided cells)? I haven't seen that mentioned, only that it "holds onto it for its entire lifespan" which is not clear to me.
There's also nothing about the interaction of these pairs with existing mechanisms, like transcription (DNA-->RNA) and translation (RNA-->protein). Could the new pair increase the space of amino acids from which proteins can be formed?
It can replicate them, but the XY pairs are just in a plasmid and not in the genome of the bacteria. They were chosen because natural polymerases could use them.
However, there are no corresponding RNA unnatural bases, and there's no tRNA that would recognize that even if there were. So it's going to take a lot more work before you can use these for coding for noncanonical amino acids.
The paper specifically states that these unnatural base pairs were chosen because they can be handled by natural polymerases. Also, the only way to maintain a plasmid over time is to replicate it. In fact, the whole logic of the strategy is to have Cas9 target mutations to the plasmid so that loss of the XY base pair during replication would lead to loss of the plasmid, and thus loss of the antibiotic resistance gene that it also carried.
I 150% guarantee you they are in a place that is transcribed. Otherwise there would be no selection pressure to maintain it and the bacteria would kick it out in favor of one of the plain old bases.
Very interesting. I like how about half the article was about reassurance that this won't lead to out of control monsters.
Along with the recent article about pig embryos, I wonder if the not-too-distant future will lead to bizarre custom pets, like something out of the Spore video game.
When I was very young I spoke with a man who informed me in no uncertain terms that the digital age was old news and the next big thing would be bio-entertainment.
So will this actually mean anything? Can adding extra letters to an organism's genetic code actually change its attributes in any meaningful way? Every biological characteristic on the planet seems to have been achieved using the same 4 molecules; will the addition of more molecules create new characteristics or be useful in any way?
Not really; essentially all DNA "really does" is code for RNA, which in turn codes for amino acid sequences that fold into proteins. As long as the RNA and amino acid libraries are the same, it's just representing the same stuff with a different set of symbols.
In practice, it's probably going to be used as a tool for manipulating organisms for experimental purposes, since scientists can do things to the synthetic bases that can't be done to natural ones (or that affect natural and synthetic bases differently).
This is what I'm wondering about: can RNA meaningfully make proteins using an instruction manual that contains molecules it doesn't recognize? The scientists created an organism with new DNA molecules, but there is nothing in the paper about whether the new molecules are actually recognized and transcribed, or what the result of that looks like.
My guess is that it would be treated as an error and corrected, but your parent suggests that the machinery just handles it like one of the other four molecules, which is also a possibility.
Correcting does happen at the tRNA stage, and it's not as trivial as one might think[0], but the technology to incorporate unnatural amino acids using reengineered stop codons has been around for a while, and presumably it will work with X and Y. Note that Romesberg (author of the DNA paper) was a postdoc in the Schultz lab, where he was originally working on this DNA stuff -- I was across the street > 10 years ago when they first got the DNA stuff to work in vitro. He got his own lab across the street and continued on the DNA stuff (but also other things).
[0] the trick is to steal a tRNA from an archaeon; scuttlebutt is that the postdoc on the paper stole the idea from the lab across the hall, which was actually working on that archaeon.
"create organisms with wholly unnatural attributes and traits not found elsewhere in nature"
I'd argue that this shouldn't add anything categorically new, just like switching from 32 to 64 bit programs doesn't change the world.
If future work can add new artificial amino acids that can be specified using the new base pairs, then it would be adding something categorically new - but that still seems pretty far off.
Well binary and ternary number systems can express the same numbers, they just do it in a different way. I don't think there's a good computer analogy here.
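To illustrate the first point: any value a binary string can name, a ternary string can name too; the alphabet size changes the representation, not the set of expressible values. A trivial sketch:

```python
# Same values, different alphabets: base 2 (two symbols) and base 3
# (three symbols) name exactly the same integers, just with different
# digit strings.
def to_base(n: int, b: int) -> str:
    digits = ""
    while n:
        n, r = divmod(n, b)
        digits = str(r) + digits
    return digits or "0"

# 42 is "101010" in binary and "1120" in ternary -- both decode to 42.
assert int(to_base(42, 2), 2) == int(to_base(42, 3), 3) == 42
```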
There are a number of questions that are probably answered behind the paywalled article...
* In order to reproduce (and maintain the new bases in its DNA), does the organism need a source of these nucleotides in its growth medium? (i.e. in its food?) The alternative would be to produce them itself, but that would require scarily advanced genetic engineering, I think.
* If one of these bases is in a gene, what happens during mRNA transcription? Total failure?
Point 1 at least should more-or-less ensure this can't escape the lab.
> But Romesberg says there's no need for concern just yet, because for one, the synthetic base pair is useless. It can't be read and processed into something of value by the bacteria - it's just a proof-of-concept that we can get a life form to take on 'alien' bases and keep them.
While it might not be useful to the organism, could the synthetic base pairs lead to a way to store data? Imagine a bacterium that can be used as a living, self-healing datastore.
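As a back-of-envelope illustration (assuming perfect synthesis and readout, which is far from given), a six-letter alphabet carries log2(6) ≈ 2.58 bits per base instead of 2. A toy sketch of packing bytes into such an alphabet -- X and Y here are just stand-in symbols, not anything from the paper:

```python
# Toy sketch: pack bytes into a 6-letter alphabet (A, C, G, T plus the
# hypothetical X, Y) by treating the data as one big integer written
# in base 6. Six symbols give log2(6) ≈ 2.58 bits/base vs. 2 bits for
# the natural four-letter alphabet.
ALPHABET = "ACGTXY"

def encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    digits = []
    while n:
        n, r = divmod(n, 6)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits)) or ALPHABET[0]

def decode(seq: str, length: int) -> bytes:
    n = 0
    for ch in seq:
        n = n * 6 + ALPHABET.index(ch)
    return n.to_bytes(length, "big")

msg = b"hi"
seq = encode(msg)            # a short string over ACGTXY
assert decode(seq, len(msg)) == msg
```

Of course, the hard part is everything this sketch assumes away: writing, maintaining, and reading the sequence without mutation.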
> Initially, the engineered bacteria were weak and sickly, and would die soon after they received their new base pair, because they couldn’t hold onto it as they divided.
> Finally, the team used the revolutionary gene-editing tool, CRISPR-Cas9 to engineer E. coli that don’t register the X and Y molecules as a foreign invader.
> "This will blow open what we can do with proteins."
> [The base pairs] can't be read and processed into something of value by the bacteria - it's just a proof-of-concept that we can get a life form to take on 'alien' bases and keep them.
> ...have not been designed to work at all in complex organisms, and seeing as they're like nothing found in nature, there's little chance that this could get wildly out of hand.
To quote Jeff Goldblum, "life, uh... finds a way".
There may be a simpler two-base codon system embedded inside the three-base codon system currently used by Earth life: when multiple codons specify the same amino acid, they almost always share the first two nucleotides. So it could be that Earth life experimented with two-base, three-base, and maybe even longer codon systems before settling on the one nearly all life uses. There may have been tradeoffs in protein complexity, reliability, energy requirements, etc. with the system that survived. Maybe just some accidental luck too.
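That degeneracy pattern is easy to check against the standard genetic code; a quick sketch using the standard NCBI table ordering (nothing specific to this paper):

```python
# Group the 64 codons into 16 "boxes" by their first two bases and count
# how many boxes map to a single amino acid regardless of the third base.
BASES = "TCAG"
# Standard genetic code, codons in NCBI order (TTT, TTC, TTA, TTG, TCT, ...)
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

boxes = {}
for i, aa in enumerate(AAS):
    prefix = BASES[i // 16] + BASES[(i // 4) % 4]  # first two bases
    boxes.setdefault(prefix, set()).add(aa)

unambiguous = sum(1 for aas in boxes.values() if len(aas) == 1)
print(f"{unambiguous} of 16 two-base prefixes fully determine the amino acid")
# → 8 of 16 two-base prefixes fully determine the amino acid
```

So half the boxes are fully four-fold degenerate, and most of the rest split only on the third position -- consistent with the idea that the first two bases carry most of the information.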
This opens a lot of possibilities for genetic manipulation, especially in connection with CRISPR. I imagine the X and Y bases are being kept secret for proprietary reasons.
https://en.wikipedia.org/wiki/Dominion_(Star_Trek) http://memory-alpha.wikia.com/wiki/Ketracel-white