It's an active area of research, but there's no consensus on why this happens other than hand-wavy evolutionary biology stuff. The most interesting mechanism is paleopolyploidy [1], where the whole genome of the organism is doubled due to hybridization or DNA replication errors, and from that point on the duplicates start diverging. It has occurred at least once in most flowering plants and it must have happened several times with this fern. Normally after this kind of event, the genome is pared down and duplicate genes are "silenced" in a process called diploidization [2], but if there are a bunch of transposable elements, they might differentiate the copies enough to keep them before the process completes.
It is common for human cancers to be polyploid after accumulating whole-genome doublings (WGD), where a tumour cell goes from being approximately diploid to tetraploid. Some tumour types have higher rates of WGD than others, for example glioblastoma, ovarian cancer, and pancreatic adenocarcinoma. But what usually happens is that the tumour loses parts of the doubled genome to reach a ploidy (average copy number across the genome) of 3-4ish.
There’s a recent Veritasium video on jumping spiders. Turns out some of them evolved red colour vision in multiple independent ways: one lineage duplicated a green colour vision gene and then mutated the copy, another added a filter on top of its green colour receptors, forcing those neurons to respond to red instead!
> there's no consensus on why this happens other than hand wavy evolutionary biology stuff
This reminds me of a comment about chess - there may be certain abstractions, lines and strategic patterns with meaningful relations "in the big", but in the small, as with endgame tablebases, tiny differences in state affect the outcome of the system in ways that are difficult to explain or predict because they seem random.
Like looking into an extremely compact fractal phase space instead of more predictable/seemingly geometrical ones.
Interestingly, warm-blooded animals (including humans!) tend to have simple genomes compared to cold-blooded ones of similar complexity. It's just much easier to get repeatable results during development when you can do all the trickiest parts at a fixed temperature; a human can use a single gene to achieve what a frog needs half a dozen for.
Moral of the story, if you notice you have to deal with a multitude of states, get out of that swamp first, get some foundations right and then iterate. Applies to both biology and coding.
On the other hand, the human body is super-reliant on very nearly exact temperature regulation. A few degrees can kill us easily. Cold blooded systems are substantially less reliant on pristine conditions.
A few degrees kills many, many people every year. I didn't say a few environmental degrees, I said a few degrees of regulated temperature. All it takes is a tiny little virus to make your immune system eat itself and kill you with your own heat. That's a very good example of a system that is highly reliant on the right environment to operate properly.
> The octopuses achieve this by editing their RNA, the messenger molecule between DNA and proteins.
Lesson number two: if you cannot avoid dealing with multiple states, consider monkey patching. The result might resemble an eldritch horror, but at least it will work.
The reason for this is that chemical reaction rates are temperature-dependent, and cold-blooded animals need different systems of chemicals/proteins to keep them operating over significantly different temperatures.
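To put a rough number on that, here's a minimal sketch using the Arrhenius relation (the 50 kJ/mol activation energy is an illustrative assumption, a typical order of magnitude for enzyme-catalysed reactions, not a measured value):

// Arrhenius: reaction rate k is proportional to exp(-Ea / (R * T))
const R = 8.314;    // gas constant, J/(mol*K)
const Ea = 50000;   // activation energy, J/mol -- illustrative assumption

const rate = (tempK) => Math.exp(-Ea / (R * tempK));

// ~37 C mammalian core temperature vs ~10 C for a cool day
console.log((rate(310) / rate(283)).toFixed(1)); // ~6.4x faster when warm

The prefactor cancels in the ratio, so even this toy calculation shows why the same reaction can run several times faster at mammalian body temperature than on a cool day, and why an ectotherm may need differently tuned proteins across that range.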
I wonder to what degree the competitive advantage of being warm blooded consists of the smaller genome vs. more obvious advantages like ability to stay active in colder climates.
As far as we know, all complex organisms have an accompanying microbiome of commensal species, even the most basic ones like marine sponges [1]. Plants nurture these symbionts in their roots while animals do it in their digestive tracts (mostly; both also have surface microbes that do various things).
So, indication of advanced evolution: outsourcing some of our development to other lifeforms (e.g. microbiomes)
I wonder: if someone took human DNA and all the necessary bits and cloned a human on another planet completely alien to Earth, would that human being have a bad time because of the missing microbiomes, or would they somehow grow their own (I'm thinking gut bacteria and so on)?
Plants are tolerant of gene duplication, possibly related to the fact that their stem cells are permanently active (which is why you can take a branch tip and get it to grow into a whole plant, quite unlike the efforts needed to clone Dolly the sheep). Their development is thus remarkably plastic (so you can get trees at the snowline that look like small shrubs, while the same species grows into tall straight trees a few thousand feet lower). In contrast, gene duplication at a large scale in any animal would probably fundamentally mess up body plan development in non-survivable ways.
Plants might be under active selection for gene duplication, since it does allow rapid evolution and facilitates spread into new environments.
We touched on polyploidy in genetics of course, but I don't recall anything particularly salient with regard to resistances. In humans, abnormal ploidy typically results in either excessive protein expression or compromised (≤50%) protein expression; in many cases this is fatal or seriously damaging in terms of fertility/development. There are exceptions, for instance the mammalian liver has polyploid cells.
But plants are way different in terms of habit, think about evolving to sit in the same place for a hundred years...
These, for example, could have epigenetic crosstalk between their environment (epiphytic nature) and their hosts. E.g. a special chromosome for birch vs oak, or drought vs monsoon. Given that the species purportedly goes back 350 million years, it stands to reason that a highly specialized and nuanced system of regulatory pathways may have emerged. Sequence data and genomics would be revelatory.
It wouldn't surprise me if there were specialized information per host, regulated by signals produced by the host; I think this would explain the redundancy pretty well. Different epigenetic pathways operating on different x¹ chromosomes, yielding differential responses to discrete small molecules/proteins/hormones produced by host species, which prove beneficial in the looong run. This could have a whole cascade of effects or just subtle SNP differences which yield fitness enhancements. Essentially each one is a subroutine for each host case, producing local optima.
But I'm just a scrubby undergrad, so take this with a grain of salt. There are probably many other more reasonable explanations; it's biology, and biology by its nature seeks to find exceptions to every rule.
But it is true that huge amounts of our genome don't do anything. There are sequences where the same letter or string repeats thousands of times. There are many copies of things that have accumulated mutations that make them non-functional.
Computer architectures sometimes necessitate no-ops under certain circumstances to facilitate functionality. Even though they're no-ops, they're not useless. To the contrary, they have very specific and required uses, which is why they exist.
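A software analogue of the same "deliberately does nothing, and that's the point" idea (just a sketch, not the hardware case above; the function names are made up for illustration):

// A deliberate no-op used as a default callback: it does nothing,
// but its existence lets the rest of the code skip null checks.
const noop = () => {};

// hypothetical function, purely for illustration
function processItems(items, onProgress = noop) {
  items.forEach((item, i) => {
    // ... do work on item ...
    onProgress((i + 1) / items.length); // always safe to call
  });
}

processItems([1, 2, 3]);                      // no handler needed
processItems([1, 2, 3], p => console.log(p)); // or pass one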
Evolution works on integrated systems, not parts. Which goes a long way toward explaining why life doesn't seem to have any single-use components - everything serves multiple purposes. We just don't understand them all yet.
The idea that repeated sequences in certain genomes are non-functional or 'junk' is questionable, as they may provide additional encapsulation to chromosomes within cell nuclei (a sort of fault tolerance). [0]
It’s my (very simplistic and layperson) understanding that the size of an organism’s genome is more correlated to how long it has been evolving, rather than any specific complexity of the organism. Since ferns are one of the oldest organisms known to science, it makes sense that their genome would be relatively large.
I don’t remember when/where I heard this, it may very well be BS.
Until a better story emerges, I'm imagining that fern mode is just what we see when it is dormant. Perhaps if we prod it in the right way it will wake up and show us its true colors.
If we count evolution as the amount of genetic material in the genome, onions are five times more evolved than us.
But a lot of this consists of redundant copies of the same information, and another big chunk is garbage borrowed from attackers. This fern could have a lot of things trying to finish it off, and a lot of time to think about the problem. And of course it could also be a hybrid, or a hybrid of several hybrids.
Plants are considerably simpler than animals, so they tolerate a lot more genetic nonsense. Crazy things like duplications, which would simply result in non-viable animals, most often don't have nearly the same harmful effects in plants, so they survive and aren't so aggressively pruned out by evolution.
It makes sense to me. If you can't move (or can't move much) there'll be less variety in what you can take in, and some simple strategies for dealing with stressors don't work, so there is more need to be able to synthesise a wider variety of things.
Recursively escaped JSON is anything but concise, because the content isn't simply copied: the backslashes are exponentially doubled at each level of recursion.
// Stringify an object, then keep re-stringifying the resulting string
let s = {"foo": "bar"};
for (let i = 0; i < 10; i++) s = JSON.stringify(s);
s; // each pass escapes the previous pass's quotes and doubles its backslashes
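For what it's worth, running that in a console shows the string length roughly doubling with each pass: the 13-character {"foo":"bar"} grows to a string of about 6,000 characters after ten rounds, nearly all of them backslashes.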
This 160 Gbp genome is over 50 times the size of the human genome. Yet the fern's complexity as an organism doesn't match this genomic enormity. This stark disconnect, a modern twist on the C-value paradox, suggests that genome size and organism complexity are far from straightforwardly linked. Rather than functional genes, the vast expanse of DNA is the product of polyploidy and is dominated by non-coding repetitive elements. This discovery pushes us to rethink the biological and evolutionary implications of such massive genomes and what drives their expansion.
Or just added redundancy for radiation resistance. There was less oxygen and subsequently less ozone when ferns first evolved, so there would be far more UV light to protect against.
As you'll often hear from geneticists these days, one person's junk is another person's treasure.
There certainly was an attitude for a long period of time that our DNA was full of junk[0], but the field has since shown that much of what we once thought was junk (i.e. non-functional DNA) is actually just non-coding DNA[1] that serves one or more of a wide array of biological functions.
In many ways, you can't really blame scientists of the 70s for thinking that much of what we now know is ncDNA was inscrutable junk. Given the technology at the time, it was.
Perhaps people should use the term "non-functional DNA" instead of "junk DNA" more often. Calling something "junk" has unnecessarily dismissive connotations.
"We don't know what it does, but junk DNA is a real thing, and that's one possible explanation" is a lot more reasonable. And I think that's a more charitable reading of the comment you replied to.
"Junk DNA" was a terrible name. But the article really should have mentioned something about how much of it actually codes for proteins (for both the fern and us).
Hmm, I’ll need to check, but the single-celled protozoan Polychaos dubium was reported in 2004 to have a genome of 670 billion base pairs. Perhaps that was an error.
I instead think of nature as storing information in living organisms. It is indeed a self-replicating database that is forever optimising ways to improve its replication.
Yes and no. Scientists throw out these sorts of terms and know what they mean, but the general population doesn't, and doesn't get the ramifications of the real meaning. Just see how many people think that when a scientist says "the universe" they mean the entire universe and not, as scientists mean, the observable universe. For general, "pop" cosmology those have very different meanings and lead to all sorts of bad thinking.
> Just see how many people think that when a scientist says "the universe" they mean the entire universe and not, as scientists mean, the observable universe
We once pestered our physics professor to explain what’s outside the universe. He finally said that’s a dumb question, the universe is definitionally everything, if we find anything beyond the edge of the universe, we’ll just call that universe too.
In many contexts, "the universe" means "the whole universe [to the best of our knowledge]". For example, when scientists talk about the age of the universe or the start of the universe or the ultimate fate of the universe, they really do mean the whole universe, not just the observable universe.
Not really, the observable universe is a specific part of the universe we know of. We also know a lot of things about the parts of the universe outside the observable universe. In time, some parts of what is today the observable universe will become unobservable (since they are receding at an accelerating speed because of dark energy). That doesn't mean they will cease to be things we talk about when we say "the universe".
Now, is it possible that some day we'll see new stars or something else coming from a completely unexpected direction, and discover that the universe also contains things that did not begin at the big bang? Sure, it's always possible, and our theories will change. But it's absurd to qualify each statement based on the possibility that new knowledge will come along at some point, when the same qualification always applies.
Observation is a cornerstone of science and theorizing about something you cannot observe is interesting but not science. This is speculation about what might be outside the observable universe and that could lead to science disproving those particular speculations, but speculation is not science, it is philosophy.
Observation is not limited to direct observation. You can take what you're seeing with instruments, and extrapolate based on known laws, and you're still doing science. In fact, this is even more important to science than direct observation, which is extremely limited. By direct observation, I can't even tell if the earth was here yesterday, or at least a thousand years ago, even less so if it will still be here tomorrow.
By the same token, when you look at certain characteristics of what you can observe of the universe, and you take the known laws of physics, you can find out a lot about the unobserved, and the unobservable, universe.
While I understand there are unknown unknowns, in science you can sometimes prove that something cannot be smaller, or cannot be bigger, because it would simply not be possible.
Not sure if that's the case for DNA.
And I am not talking about unknown unknowns like some living organism using some other "substance" as the genetic material.
Not ferns specifically, but I've read a simplistic explanation that plants lack behavioral defenses, so they rely on chemical defenses. And more chemical defenses requires more genes.
Because it somehow survived millions of years despite that massive inefficiency holding it back. Quite remarkable luck not getting out-competed to extinction.
> despite that massive inefficiency holding it back
I assume you are referring to the size of the genome. Has anyone been able to prove that it is causing an inefficiency? Maybe it isn't. In classical programming languages, more code is sometimes more efficient, as with unrolled loops. That analogy may not apply here; I am far from knowledgeable in this realm.
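For what it's worth, here's a sketch of the unrolled-loop idea in JavaScript (function names are made up for illustration, and whether the unrolled version actually wins in a JIT-compiled language is very engine- and hardware-dependent):

// Rolled: one element and one loop-condition check per iteration
function sumRolled(a) {
  let total = 0;
  for (let i = 0; i < a.length; i++) total += a[i];
  return total;
}

// Unrolled by 4: more code, fewer condition checks per element
function sumUnrolled(a) {
  let total = 0, i = 0;
  const limit = a.length - (a.length % 4);
  for (; i < limit; i += 4)
    total += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
  for (; i < a.length; i++) total += a[i]; // handle the remainder
  return total;
}

The point isn't that this is good JavaScript; it's that "more code" and "more efficient" aren't opposites, which is the caveat the genome comparison needs.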
If it's really just inefficiency, wouldn't a mutation that removes some of the surplus genome bring an evolutionary advantage? Those mutations are probably rare and the advantage minuscule, but anything adds up over a long enough timeframe.
Awful summary video, in my opinion. Takes 5 minutes to get to the new discovery, then spends 3 minutes repeatedly claiming that we don’t have any explanations for the wide range of genome sizes, then 2 mins of Patreon credits to get over the 10 minute mark. There might be 30 seconds of actual content in this video.
I’d give a high schooler a bad grade on this; why do so many people give this guy money to make low-quality content like this?
He could have just picked any section of this Wikipedia page and read it verbatim and he would have transmitted more information: https://en.m.wikipedia.org/wiki/Genome_size
Bad channels that have accumulated views and subscribers keep getting recommended by the YouTube algorithm. Hence the filler segment to get the video to 10 minutes. That's just how YouTube works nowadays.