1) While I agree with the article's assessment that superconduction along huge distances is a likely no-go given pressures involved, it's not out of the realm of possibility that we could find a way to apply massive static pressure loads to small high-performance circuits.
2) If the pressures are 75% of those at the Earth's core, that raises interesting geologic questions about what's going on down there. Perhaps the model for Earth's magnetic field (or the material that causes it) will need to be adjusted to account for the possibility of naturally occurring superconductors.
> superconduction along huge distances is a likely no-go given pressures involved,
Pedantic note: they said only for this material, not a no-go in general. Not sure if you caught that.
"Strained silicon" is a technique for achieving faster frequencies in modern CPU by applying force to the lattice with another deposition layer. I wonder if properties of the lattice under pressure can be instigated either by similar process, or even during phase change when creating the material.
Off the top of my head, that sounds like you're off in the pressure requirement by multiple orders of magnitude. The pressure used for the superconductor is enough to shatter diamond. Creating strain in a lattice like that would just shear the materials apart.
Material tensile strength can be orders of magnitude higher than compressive strength. I wouldn't dismiss the possibility of materials internally stressed at the microscopic level reaching such pressures.
I also know nothing about materials science, so it's baseless speculation.
I wouldn't rule out extremely high internal pressures in static materials as a future research direction. Prince Rupert drops reach a pressure of 700 Megapascals, roughly 2-3 orders of magnitude less than the required pressure for this experiment. I'd imagine that more advanced processes and materials can already manage internal pressures substantially greater than the Prince Rupert Drop.
I can readily imagine some cross-discipline techniques achieving this. A little lithography, some selective dissolution like Gorilla glass, perhaps some tricks from the optical fiber industry, and bam, you have a fine coaxial filament which creates a region of 100s-of-MPa stress along the length of the fiber. Maybe even GPa, though that's tensile-strength-of-nanotubes territory.
Certainly it's years away, but I dunno... this feels possible in an engineering sense.
> 100s-of-MPa stress along the length of the fiber
It's all fine and dandy until it breaks open along the length and releases not just the stored mechanical energy, but also all the inductive energy from the flowing current of a few million amperes.
Superconductors aren't magic. You can't just stuff a few million amps into a tiny wire. Just as there is a critical temperature, there's also a critical current density. While those two numbers are related, you'll need quite a bit more than "room temperature" superconductors to carry currents like that. And there's a temperature/current tradeoff: if the ambient temperature is close to the critical temperature, then a wisp of current can make it go normal.
1 meter of 1mm diameter wire with 1 million amperes amounts to 750kJ, or ~180g of TNT.
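For anyone who wants to check that number, here's a quick sketch using the textbook self-inductance of an isolated straight round wire, L ≈ (μ0·l/2π)(ln(2l/r) − 3/4); the 1 m / 1 mm / 1 MA figures are just the ones from the comment above, and 4.184 MJ/kg is the usual TNT equivalence:

    import math

    mu0 = 4e-7 * math.pi          # vacuum permeability, H/m
    length, radius = 1.0, 0.5e-3  # 1 m of 1 mm diameter wire
    current = 1e6                 # 1 MA

    # Low-frequency self-inductance of an isolated straight round wire
    L = (mu0 * length / (2 * math.pi)) * (math.log(2 * length / radius) - 0.75)
    energy = 0.5 * L * current ** 2            # stored magnetic energy, E = L*I^2/2
    print(L, energy, energy / 4.184e6 * 1e3)   # ~1.5 uH, ~750 kJ, ~180 g TNT

So the ~750 kJ / ~180 g TNT figure checks out for that geometry.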
A superconductor blowing up across its length and distributing the energy doesn't seem as bad as a whole transmission line's energy being released at a point fault.
Assuming superconductors replace existing wires, it won't be more power than the wires already carry, just in a smaller area.
My instincts tell me it would actually be much safer. An overheating wire would instantly lose superconductivity and vaporize into an open circuit, the same thing a fuse does.
Current wires have enough bulk to heat up and melt slowly, which gives more opportunity to burn things and set stuff on fire. Vaporized plasma is low density, so it probably won't concentrate enough energy to light surrounding materials on fire. Plasma also dissociates quickly in air, again leaving way less time for fires.
For mechanical stress, it may be sufficient to just ensure the stress is evenly distributed in all directions. This way the energy will be driven into a shockwave and heat rather than motion. The wires might make a big bang, but that isn't very harmful underground or high in the air on lines. Explosions are way less dangerous in open air than they are when confined.
Put two superconducting fibers next to each other with a thin insulating layer, or many fibers in a bundle.
Have the supercurrent in one direction matched by the same supercurrent in the other direction, alternations as closely spaced as possible.
The magnetic field will be confined, at least.
When the fiber breaks, join the entwined superconductors together (equivalently: have the insulator break down), and the current in the rest of the transmission line can continue flowing.
It is hard to imagine a material strong enough to maintain residual stresses this high! Maybe in diamond? And what happens when it fractures and all comes apart? (unless it is contained in a very small volume so has bugger all potential energy)
Materials do a little better under compression than tension. But yeah, 267 GPa is a lot; the working stress for most high-strength metals is about 1 GPa.
That said, under shock conditions for instance, you can locally get huge stresses without breaking the material.
I'm not so sure there are geologic implications. This kind of superconductivity only occurs in very specific compounds at very specific temperatures -- way cooler than what's going on in the Earth's core.
We've found a few substances that are superconductive at very low temperatures, but it doesn't have any impact on geology because those conditions simply aren't found on earth.
Not necessarily; the point of the article is that there is a high/room-temperature superconducting material. That doesn't mean it is naturally occurring or that there are any, but the understanding that you can achieve superconductivity at higher temperatures when at (significantly) higher pressures could lead to some new geological hypotheses to try and test.
The Earth's core is north of 4000 degrees Celsius. It's a huge leap to imagine any superconductor at those temperatures, never mind such a precisely engineered mix of elements occurring naturally. We're squarely in the realm of fantasy here, not geology.
"Fantasy" seems a little bit harsh to me, when our deepest borehole reaches only 12,2km down. Also our acceptance of continental drift/plate tectonics is rather young.
In other words: We have a few vague assumptions of what happens down there, based on our current understanding of physics, which we base on what we could veri-/falsify by experimenting with things which are accessible or at least visible to us.
Nobody has been down there, we have no pristine samples, nor the ability to get them(for now).
That we've never made a hole that deep does not mean we know nothing. We know a great deal by measuring seismic waves as they pass through the earth. That's how we know the earth has a solid inner core, despite the temperatures.
It's fantasy given what we understand about the earth and about superconductors. That's not impossible, but like any highly improbable claim it requires extraordinary evidence to take seriously. Otherwise it's like claiming there's a teapot between the earth and the sun, or some kind of invisible deity. We've left the realm of science and the natural world and entered the world of fantasy.
I didn't claim we know nothing, just that our knowledge is vague. I'm aware of seismic surveys, even of attempts to use neutrino observatories to look through the Earth.
We can't really predict when a volcano will erupt or with what force. Same for when, where and why with earthquakes. That's much more accessible and of concern to us, yet we can't. Because our understanding is vague! Got it?
I think the only way to answer that question is to do more research, as opposed to squarely assuming that something is impossible simply because you didn't think of it.
The key to why the Oklo reactor was possible was not that a natural process separated U-235 from U-238 in a uranium ore, but that the reactor existed 1.7 billion years ago. U-235 has a half-life of about 700 million years, so the ratio of U-235 to U-238 in naturally occurring uranium decreases over time.
U-235 made up about 3.1% of the uranium in ores 1.7 billion years ago, which is an enrichment some reactors can use today. Uranium ore today has only about 0.72% U-235, so an Oklo-type natural reactor could not form on today's Earth. See [1] for a good description; the atlasobscura article is poor and factually incorrect in places.
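For anyone who wants to check that figure, here's a rough back-calculation from today's ~0.72% abundance using the published half-lives (U-235 ≈ 704 Myr, U-238 ≈ 4.47 Gyr); the exact result depends on the age you assume for Oklo, which is usually dated to roughly 1.7-2 billion years ago:

    half_u235, half_u238 = 704e6, 4.468e9   # half-lives in years
    t = 1.7e9                               # assumed age of the Oklo reactor

    n235, n238 = 0.0072, 0.9928             # today's natural atom fractions

    # Undo 1.7 billion years of decay for each isotope
    then_235 = n235 * 2 ** (t / half_u235)
    then_238 = n238 * 2 ** (t / half_u238)

    print(then_235 / (then_235 + then_238))  # ~0.03, i.e. roughly 3% U-235 back then

Pushing the age toward 2 billion years brings the result up to the often-quoted 3-4% range.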
But in a molten core, won't hydrogen sulfide and methane bubble out from the center, leaving only iron and heavier elements and compounds in the super high pressure depths?
Very likely. But more studies would be needed. The best phase diagram for H2S and methane that I could find only went up to ~140 bar. There were some interesting papers on clathrates and methane in the kilobar range, but I'd not think those were interesting for deep-earth experiments. The deep earth is ~3.3 megabar, for comparison.
The universe is pretty big, and there are lots of places where unlikely things didn't happen. When the next unlikely thing happens, it most likely won't happen here.
If you ask yourself this question, then 100%. You need to exist to even consider the unlikelihood of your existence.
But as far as I know the existence of naturally occurring superconductors is a completely independent probability, so it doesn't really make sense to use one to justify the other. Are there naturally occurring superconductors somewhere in the universe? I mean, without doing any calculations I'm tempted to say almost certainly yes. Do they exist somewhere in the Earth? As far as we know, extremely unlikely.
It's pretty likely. There are at least 1,000,000,000,000,000,000,000,000 stars in the universe, and most of them have planets. Billions of years have passed since the Big Bang. If some chemical process can create life, it's very likely that somewhere it did.
Scientists who have done the math disagree. It's a major current issue.
It's extremely difficult to produce useful proteins from random DNA chains. As in, if I took all of the atoms in our galaxy, paired each with random DNA, and allowed you to pick just one (blindfolded), only one of those DNA strands would contain the DNA necessary to produce a valid/useful protein. Literally every other atom has garbage DNA/proteins. The human body contains between 80,000 and 400,000 proteins.
DNA looks far more like "information" than it does like random bits written to disk. It's analogous to trying to find an x86 program of at least 160 instructions that computes a valid mathematical function by randomly splatting 1s and 0s to disk and then "running" the "code". Eh, maybe it'll eventually happen, but you can see how hard it actually is in practice. Heat-death-of-the-Universe hard.
Given a long enough timeframe, even unlikely things will happen. However, the Universe isn't very old, which is why it's a major current issue.
You do realize that natural selection iterates through these same sorts of garbage proteins at a rate of trillions and trillions of bacteria per year, for millions of years, and with a bias toward previously existing functional structures? It's a pretty potent optimization process, given the timeframes and scale. It isn't equivalent to the random-bits-on-disk analogy, because there can also be incremental progress made toward a functional protein, unlike with code.
There is a question of where does the first self-replicating molecule come from, and how do we get to DNA from that, and how do we get the diversity of proteins that we see today.
Creating random DNA sequences and showing they don't produce 'useful' proteins has nothing to do with any of those questions (and how do we even know they are not useful?)
I think the main stumbling block for me is the perception of time scales. It is impossible for me to say, with certainty, any information regarding time scales of a billion years and the randomness which permeates the evolutionary process. What we see and know are the winners of the race, not the mountains of failures.
> It's extremely difficult to produce useful proteins from random DNA chains.
I think we all agree about that.
> Scientists who have done the math disagree. It's a major current issue.
The problem is that you can do the correct calculation or a wrong calculation. For example, the number 160 is probably too high. Some current proteins have 160 or more amino acids, but there are shorter proteins, and there are some useful short amino acid chains with only 20 amino acids.
There are between 100 and 200 billion galaxies in the observable universe. There were billions of years to do the choosing - how many times per minute am I allowed to do it?
> The human body contains between 80,000 and 400,000 proteins.
Good thing first life wasn't homo sapiens then (and probably wasn't using DNA).
The framing you have here is an attractive one, but I don't think it makes much sense in the context of reproducing molecules.
There is no reason to posit random DNA chains.
The statement that "the number of DNA chains that produce valid/useful protein in the space of all possible DNA chains is vanishingly small" seems reasonable (however I'm not sure how we would know these chains are the only ones that produce valid/useful proteins).
The idea that we need to choose randomly from the space of all possible DNA chains is not reasonable.
----
Once we have a reproducing molecule, we expect to see a multitude of valid reproducing molecules as descendants of that first molecule. We expect (at least some of) these descendants to eventually be extremely different from the original molecule, and by their nature valid reproducing molecules.
Once we have a reproducing molecule (like DNA) that creates other molecules (like RNA and proteins) we can expect the same of its descendants, and the descendants' by-products.
If these molecules form an ecosystem, where the reproduction of one relies on the validity of the other, the only successful variations within the ecosystem will be valid variations of the ecosystem.
----
The space that we are choosing from is not the space of all possible DNA chains, it is the space of all DNA chains adjacent to existing valid chains (or chains in a valid ecosystem).
It's analogous to taking a valid x86 program that can reproduce, randomly adding/removing/mutating some bits on reproduction (with low frequency, very quickly, and in a ginormous space - think on the scale of molecules in the Earth's oceans), and asking if that new program is also valid. And then, after millions of years of this, asking if one of the programs is a valid mathematical function.
----
There are still big questions here. Questions like "how do we get the first reproducing molecule?" and "is DNA likely to arise once you have reproducing molecules, or just one out of many options?"
None of those questions give reason to invoke the number of all possible variations of DNA as evidence that the variation we see in proteins is somehow unlikely.
Once we know that there exists one valid DNA/protein system (which we do, as it exists), and we know that variations of DNA/protein ecosystems can be functional (which we do, as we've observed it), it is reasonable to expect a multitude of valid, functional DNA chains, and the proteins produced by them.
Like you, I can imagine hundreds—maybe thousands—of ways to resolve these issues.
That's hardly relevant though, what matters are resolutions that actually work.
I agree that we are (very) likely to find the mechanisms involved, but so far, we haven't. In fact, we don't even have a theory on how DNA was originally developed, or how non-functional DNA/proteins self-replicate, or really anything at all. We only have the end product (which does—as you point out—work). The question is how did it get there, and previous hand waving about a huge, old Universe and random chance isn't sufficient.
It's going to have to be something similar to what you (and other commenters) describe: mechanisms that preferentially and relatively quickly produce valid, self-replicating DNA/protein chains. To date, no one has found anything even close to that.
You see the difference between this argument and what you wrote above though?
Perhaps I'm reading your original post too strongly, so please correct me if so.
In the first post you compare the number of valid DNA chains to the space of all possible chains, you mention the number of different proteins in the human body, and you draw an analogy to a random sequence of bits forming a valid program.
None of these talk to the probability of a reproducing molecule arising through physical processes, nor do they talk to the probability of DNA as a descendant of that original reproducing molecule (or potentially multiple original molecules).
I get that you understand the gaps in our knowledge of how these systems came to be; my point is that your original argument is misleading in the exact same way you claim the argument
"Billion of years passed since the Big Bang. If some chemical process can create life it's very likely that somewhere it did."
is a
"kind of hand-wavey statement [that] seems to convince most people. Universe is hella-old, and really big. Ergo, incredibly rare stuff has happened basically infinitely many times. Life everywhere, etc."
(this was a reply to a different post, but I think it holds to the comment you originally replied to).
In fact, I find the argument that "things reproduce, and have been reproducing for a long time in a large environment, so we expect to see complexity in those things" much more reasonable than "most random arrangements of this molecule are useless, and we can see lots of useful arrangements, therefore time and randomness can't explain them".
We're discussing how to get those "things that reproduce" in the first place. I agree that once you have useful things that reproduce, it's easy to keep it going. Similarly, if I have a running copy of Linux, I can use the tools (and source code) to produce another copy of Linux.
But how do we get the first copy, the "original reproducing molecule" as you put it?
The usual explanation is that the "first copy" arose randomly, and then kept going. Do you believe that? I suspect not—but most people do.
We know that it can't have been random (which is the argument I gave, and I suspect you agree with). We should tell people "it wasn't random, something about the fundamental nature of these molecules caused better and more complex molecules to emerge." But we have no mechanism for that, just a (valid) belief that it has to be true.
I think we should find those mechanisms, and simultaneously, stop telling people that random chance + vast universe + long timespan is sufficient.
> The usual explanation is that the "first copy" arose randomly, and then kept going. Do you believe that?
I believe a variation of that.
I believe that the first copy arose through physical processes.
Evoking 'randomness' is unnecessary and misleading.
Do you not believe this?
To my knowledge, we don't yet have a mechanism for how such a molecule came into being (though there are ideas).
We also don't have any reason to think that it must be some random single choice from a large possibility space, and we don't have any evidence at all that it could have arisen from non-physical processes (what would that even look like?).
This is what I mean by random: no DNA sequence is privileged over any other, and no (known) physical process produces anything but random DNA sequences (excluding, of course, copying already useful DNA sequences).
DNA has about as much structure as bits on a disk (with coding for one of 20 amino acids as the "bits"). No DNA sequence is more likely than any other to exist.
I think that means we need to identify strong physical processes that produce useful DNA strands; you, apparently, aren't as concerned about it. Maybe you're right, but from where I'm sitting, it's hard to imagine what those physical processes might be since the strands they must produce are extremely, unimaginably rare in practice.
DNA is basically information[0], and we literally have no example of a chemical process producing valid DNA information, nor is it at all obvious how such a process might work in practice. In the past, large amounts of time + equal likelihood of producing random DNA was considered sufficient to think "well, useful DNA strands could appear randomly." We now know that's extremely unlikely, to the point of being effectively impossible, statistically speaking.
But some DNA sequences are privileged over others!
The mechanism for producing new DNA sequences involves copying existing DNA sequences. Thus, the ones that exist are privileged over the ones that do not exist (yet), and adjacent sequences are privileged over a random sequence.
> No DNA sequence is more likely than any other to exist.
It is far more likely for a DNA sequence very similar to my own to exist than a random sequence.
> we need to identify strong physical processes that produce useful DNA strands
We have already identified those processes! We know quite well how the machinery of DNA replication works.
If we care about the first DNA molecule to ever exist it's a very different question. We don't need to find a physical process that produces a modern DNA molecule from 'raw parts', rather one that takes not-quite-DNA and converts it into DNA.
> it's hard to imagine what those physical processes might be since the strands they must produce are extremely, unimaginably rare in practice.
Can you imagine slightly simpler DNA? Say just a bit shorter? What's the simplest molecule we might still call DNA, that is reproducing? Can we imagine machinery that would produce that?
I think it's very reasonable to think such machinery could exist, even if we don't know the exact mechanisms involved. We know that RNA can self-reproduce, and also produce proteins, so it's reasonable to think that machinery to produce RNA strands could evolve to produce DNA strands (for example).
The only involvement randomness has in this whole process are (relatively) rare and infrequent changes to self-replicating molecules, and (potentially) the initial formation of a self-replicating molecule.
It is irrelevant how many possible DNA sequences there are, or how much information is stored within them, as we know new sequences are derived from previous ones.
> If we care about the first DNA molecule to ever exist it's a very different question. We don't need to find a physical process that produces a modern DNA molecule from 'raw parts', rather one that takes not-quite-DNA and converts it into DNA.
We haven't found that, and apparently aren't even close. We don't even have any idea what something like that might look like, or even more critically: given all the incredibly, insanely, unbelievably rare DNA sequences that exist in the world today, why is such a fundamental process capable of producing them not abundant as fuck already? Where'd it go? Why is this process even a mystery in 2020? It should be ubiquitous; in fact, all of the primordial soup mechanisms should be. Certainly that's what we expected when the theory was developed, and it hasn't panned out.
Anyway, I think we've exhausted this topic. Thanks for commenting.
> We haven't found that, and apparently aren't even close.
We do have ideas! Specifically, within the RNA world hypothesis, the transition period is called the virus world [0].
> given all the incredibly, insanely, unbelievably rare DNA sequences that exist in the world today,
We have a good understanding of where diversity comes from, I'm not sure what point you're making here.
> why is such a fundamental process not abundant as fuck already. Where'd it go? Why is this even a mystery?
I don't think anyone thinks this process need be 'fundamental', though it definitely is pivotal. It only really needed to happen once, and then DNA was off reproducing and spreading by itself. That said, it looks like viruses converting RNA to DNA could still be happening today.
In general, we don't expect novel self-reproducing molecules to arise today, because they are out-competed by existing self-replicating molecules. In a world where nothing is replicating the first replicator is king. In today's world a brand new replicator is food for something else.
> Maybe it's possible that your romantic view of how this all happens (pseudo-Darwinian circa 2020) isn't telling the whole story?
I don't think I, or anyone else really, is claiming to tell the whole story - just that we have good reason to believe this came about through physical processes, and no evidence to believe... well I'm not sure what else there could be.
Interesting discussion. I’d like to ask a sincere question:
Wouldn’t a system A that is capable of encoding another complex system B need to be at least as complex, in order to encode all the information in the result?
It’s like a compression algorithm, you can encode the information, but the complexity level of that information is still there (also the difficulty in compressing the information increases very fast - exponentially or maybe even factorially).
So if the most basic protein sequence requires so many bits of information, wouldn’t anything capable of producing that (in a non-random manner) also require at least that level of information (if not more).
It doesn’t matter what process we call systems A and B.
So it seems if randomness doesn’t solve the problem (because math), then the only conclusion is that there is a fundamental requirement for intentionality.
It's possible for a simple thing to encode something more complex, deterministically.
The prime example is The Game of Life - simple rules from which complex behaviour emerges.
This idea of information is one we're putting onto the system, not some inherent attribute. Yes, the encoding of a protein needs to have enough information to produce that protein (or a family of proteins), but that says nothing about the process that created the encoding.
For example, a strand of RNA can be spliced in many different ways to create many different proteins [0] and this process can go weird in many ways. New sequences will arise from this process, even though they weren't 'intended' to.
The Game of Life doesn't produce complex behavior from simple rules.
The complex behavior comes from a large enough random starting state combined with a very low minimal complexity required to see something interesting. Also, even for a short interesting run of local behavior, the game never produces a stable behavior that grows in complexity beyond the initial information encoded in the random state. (I.e. if there is a bubble of cool stuff happening somewhere on the 2D plane, something usually interferes with it and destroys that pattern - like waves in the ocean: even when the energy curves combine to form a wave once in a while, they are limited and temporary.)
So the Game of Life is actually an example that the system is limited to the information encoded in the initial starting state.
In the starting state there is either:
- a large enough random search space (i.e. a million random attempts with a 100x100 board might get something cool looking)
- intentionality (a person can design a starting state that can produce any possible stable system)
Yes, and useful proteins are basically the equivalent of "oscillators" or "spaceships" in the game of life. But most patterns in the game of life are not oscillators or spaceships, just like most proteins are useless.
That's why the "initial condition" is so important, and why DNA is so important: without a good "start state", you get useless results—just like in the game of life.
What we are trying to find is not Conway's rules for the game of life, but this: how do we produce useful starting states (DNA) with a physical system? And more importantly, how do we create those starting states preferentially (i.e. non-randomly)?
We still need a model for how useful DNA (which corresponds to the "initial state" in the game of life) gets created. And we have no model for that right now, other than assuming unique random initial states are continually occurring and letting the law of large numbers eventually "find" winners.
For DNA, at least, it could have come from RNA (as per the link in my last post).
While I don't think the pre-biotic problem is solved at all, we have a lot more models of how it could have happened than you seem to credit - this is after all a huge research area.
For example, here is one [0], and here is a whole journal issue on the subject [1].
I found these by searching for 'evolution of DNA' and 'evolution of RNA'.
Now, these models all include some randomness, but in no way does anyone assume "unique random initial states are continually occurring... letting the law of large numbers eventually "find" winners"
The models show plausible environments where pre-biotic synthesis of RNA (or RNA pre-cursors) can occur, and stabilise.
This model you keep bringing up - randomly selecting a molecule from all possible combinations of atoms and saying 'enough time will get you one that works' - is not mentioned anywhere that I have seen. Perhaps some lay-people (of which I am definitely one!) believe it, but as you point out it is so obviously implausible it falls down on first inspection.
There are other models (lots of them!) and they don't rely on this pure randomness.
Minor side note, but most runs of the game of life actually will produce spaceships and/or oscillators, even starting from a random configuration. (Initialize a 100 x 100 box of cells randomly, and you're virtually guaranteed to get several gliders flying off of the resulting mess.)
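If anyone wants to try that at home, a random 100 x 100 soup is only a few lines of NumPy. A minimal sketch (it uses toroidal wrap-around, so gliders circle back instead of flying off the edge):

    import numpy as np

    def step(grid):
        # Count each cell's 8 neighbours by summing shifted copies (toroidal wrap).
        neighbours = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Conway's rules: a live cell with 2-3 neighbours survives,
        # a dead cell with exactly 3 neighbours is born.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

    rng = np.random.default_rng(0)
    grid = rng.integers(0, 2, size=(100, 100), dtype=np.uint8)  # random 100x100 soup
    for _ in range(500):
        grid = step(grid)
    print("live cells after 500 steps:", int(grid.sum()))

Plotting the grid every few steps (or dumping it into any Life viewer) makes the leftover oscillators and escaping gliders easy to spot.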
This assumes a perfect random distribution though.
What if amino acids and proteins are in fact likely to arise naturally and in favorable circumstances?
There was an article recently (~1-2 months) on HN about a supercomputer/AI discovering new chemical pathways for part of this process, but I can't seem to find it anymore. I think it was about forming amino acids.
I'm no expert on this subject (the opposite really, I've slept through chemistry), but my experience with large-scale simulations has been that a surprising number of them converge to the same final result given the same starting parameters even if most processes within them are perfectly random. The bigger the simulation, the more likely they are to give you stable results. And the universe is pretty damn huge.
So I like to believe the creation of the foundations of life is in fact more-or-less inevitable in our universe, in turn increasing the chance of useful proteins etc. forming.
And that kind of hand-wavey statement seems to convince most people. Universe is hella-old, and really big. Ergo, incredibly rare stuff has happened basically infinitely many times. Life everywhere, etc.
Only…it's actually not that old, we have some idea how big it is (not that big, just lots of space between atoms), and thanks to computer science, we're pretty good at analyzing issues surrounding computation complexity.
And as it turns out, the DNA-to-protein pathway is much, much, much less likely than our initial hand waving made it seem.
I'm not saying it didn't happen, I'm saying with our current level of knowledge we have no idea how. The math based around being old and big doesn't work. So we need better math, more studies, etc. and less hand waving.
>Ergo, incredibly rare stuff has happened basically infinitely many times.
This wasn't my argument though. In fact it was the complete opposite.
I was proposing that it was in fact likely and thus pretty much guaranteed to happen in a large universe, as opposed to being unlikely but still likely given a large enough universe.
So we're working with different assumptions here.
In fairness I put my assumption way at the beginning of my post, so it probably got forgotten about by the end of it. Quoting myself:
> What if amino acids and proteins are in fact likely to arise naturally and in favorable circumstances?
We haven't yet conclusively found all of the pathways these can arise, and we continue to discover more. People just tend to assume it's pretty unlikely. I'm not so sure.
Comparing amino acids to proteins is a category error, almost akin to comparing individual x86 instructions to a full x86 Linux kernel binary. The increase in complexity is not just one of size; it's a fundamentally different thing altogether.
The amount of information (via DNA) needed to create a useful protein from the 20 amino acids is absolutely incredible.
So…finding more potential (note: not demonstrated) pathways to create amino acids ex nihilo does literally nothing for producing viable DNA strands and proteins. DNA and proteins are a totally different problem, and we've made basically no progress at all, and the more we look at it, the less likely it seems.
And then people (not you per se) hand wave about the size of the Universe to explain the problem away. I think we should instead accept the problem exists and work to solve it.
----
Separately, we have no known examples of any natural process producing what we, as humans, would call "information." DNA is much closer to information than any other concept, to the point where if we were sent something similar to DNA from space in, say, a radio transmission, we would absolutely assume intelligent life had made that transmission.
That is, with our current knowledge, it takes something vaguely "intelligent" to produce the kind of information we have in DNA. Maybe such processes exist, but this is an absolute far cry from producing amino acids from chemical precursors, which are not information-like at all (and thus, it is unsurprising that we can do it).
I have found your comments on this thread very intriguing. The computational analogy applied to DNA and proteins is apt for me. Also, this strikes me as a potential resolution to the Fermi Paradox. What do you think?
Well, they're obviously related in that we really need to discover/determine how useful DNA came to be, starting with just the primordial soup. If we can get more accurate numbers for the Drake equation, that would certainly go a long way towards explaining the paradox.
I brought it up on HN because relatively few people seem to know this is still a problem, and progress on resolving it has been slow.
The many worlds interpretation of quantum mechanics increases the combinatorial space to play in for some otherwise unlikely seed event by tremendous amounts.
Also, we know stuff like the smallest observed polymerase, but we don't know what the smallest functional one would be that could have evolved into it.
We also have self-replicating pure RNA systems, though the components aren't abundant. But this is just what scientists came up with in one effort trying to make one to prove it is feasible:
But why assume a leap directly to proteins, by definition a long chain of amino acids? Couldn't we have started with self-replicating peptides and incremental improvements?
Peptides are just short proteins, and no, we have no idea how to get them either (though it's obviously easier).
Also, it's not that what I've called bad/garbage DNA doesn't produce proteins, it's that the proteins produced are useless: they don't "do" anything. There's no obvious reason why DNA "extension" should produce useful proteins over un-useful ones, at least, no mechanism that we have discovered so far.
Instead of accepting a theory of incremental improvement that "sounds nice", waving our arms about random chance and an old, vast Universe and going "yup, that's how it happened!", let's try to develop testable mechanisms and validate them.
I'm asking for more rigor while simultaneously shooting down "random chance", "plenty of time", and hand waving about the Law of Large Numbers. We've done the math and we need far more effective, directed mechanisms than random chance to produce useful DNA sequences.
Fascinating. And of all life on Earth, how did human consciousness arise?
We're basically virtual machines/entities/minds stuck inside biological bodies, and the majority of us are at odds with nature and every other living organism on Earth.
I guess I'm more interested in how that happened than how life started, both seem equally incomprehensible to me, though.
Large number comparisons are difficult for humans to comprehend.
If you simplify life to a DNA strand 256 nucleotides long (for the sake of math comparison), then the search space is 4^256. To comprehend how large a search space this is, watch 3Blue1Brown's explanation https://youtu.be/S9JGmA5_unY?t=38
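Just to put a number on that search space (a back-of-envelope sketch; the ~10^80 atom count for the observable universe is the usual rough estimate):

    strands = 4 ** 256          # all possible 256-nucleotide sequences
    print(len(str(strands)))    # 155 digits, i.e. ~1.3e154
    print(strands // 10 ** 80)  # still astronomically large after dividing by
                                # the rough number of atoms in the observable universe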
Maybe humanity will be able to check the entire universe for life. I love imagining that scenario and wondering what the reaction would be when we find none.
This comical notion has always struck me as scientism at its finest. There is nothing "pretty likely" about it considering a) we have no idea how abiogenesis occurs, and b) we have literally zero evidence of any form of life outside of our rock.
The second sentence does not support the last one. The last one is independent and known as the anthropic principle, i.e. even if it was extremely unlikely by some measure, it still happened. Whereas there's no indication whether 10^24 stars is a number far past the goal post, or relevant at all in your back-of-the-envelope estimate.
It rather seems that you (and the 4500-6000°C comment) were committing a fallacy of large numbers. You might as well write a friggin' fantastillion, unimaginable, zomg!, and you would still convince roughly the same gazillion number of people. But it's good to hear the details.
4000-6000 doesn't sound like much at all in years to me, for example, but it used to.
For what it's worth, the classic Drake Equation is that old "back of the envelope" calculation they put together to try to ask this very question - what's the likelihood life evolved?
The problem with the drake equation is it had several variables for which we had no flippin idea what the values were. For example, supposing there are a bajillion stars (we know that much), then we have to multiply against how likely it is for those stars to have a planet - and at the time, the likelihood of planets was completely unknown.
That at least, is something that's changed in the last decade, thanks to new telescopes. We've addressed one of the Drake Equation's big unknowns: we now can hazard a guess that planets are extremely likely.
Sadly there are enough other unknowns that we still can't make any sort of conclusions, but at least the betting odds are going up.
well it is, but it's an accurate response to somebody already engaged in it. It is the GP who finds it "probably unlikely" without any indication of actual probabilities, chiefly rounding down from a haphazard guess, after all.
One of the interesting conversations about a new technology is figuring out the 'ladder' of applications for the tech as more or larger versions become available in a particular price class.
Portable MRIs are one application, once you can make a big enough chunk of the stuff, but earlier than that, couldn't you use small pieces of superconductor in communications equipment? Power supplies, ranging from IC power regulators up to mains power transformers?
While I agree that it is an interesting finding, the lede of "at room temperatures" buries the fact that using a median of 330-360 gigapascals as a measure of the Earth's internal pressure equates to a pressure of 8223639745.0093 pounds per square foot. (Rough calculations based on 3,300,000 to 3,600,000 atm for the Earth's inner core.)
[Edit to correct the maths] = 5325785739.6251 pounds per square foot.
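For anyone who wants to redo the conversion, a quick sketch (the constant is the standard 1 lbf/ft² ≈ 47.88 Pa; 267 GPa is the figure from the paper's abstract quoted further down, 330-360 GPa the usual inner-core range):

    PA_PER_PSF = 47.880  # pascals per pound-force per square foot

    for gpa in (267, 330, 360):
        psf = gpa * 1e9 / PA_PER_PSF
        print(gpa, "GPa ~", f"{psf:.3g}", "lbf/ft^2")
    # 267 GPa ~ 5.6e9 psf; the 330-360 GPa inner-core range works out to ~6.9e9-7.5e9 psf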
Couldn't the pinch effect be used to crush two superconducting filaments into each other? My rough calculation is something like 5 MA (in each conductor, spaced by 1 mm) would do the trick, assuming that much current and B don't interfere with superconductivity (I have no idea).
Looks like the highest current ever achieved is 100 kA. A mere 100 fold increase and we should be good to go.
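For a sense of scale, the standard force per unit length between parallel currents, F/L = μ0·I²/(2πd), is easy to evaluate; the hard part is the geometry, i.e. over what contact area that force is spread. A rough sketch (the 267 GPa target is the paper's figure, everything else follows the numbers above):

    import math

    mu0 = 4e-7 * math.pi
    current = 5e6   # 5 MA in each filament (the figure above)
    d = 1e-3        # 1 mm spacing

    # Standard attraction per unit length between parallel currents
    force_per_m = mu0 * current ** 2 / (2 * math.pi * d)   # ~5e9 N/m

    # Contact width over which that force would average out to the paper's 267 GPa
    print(force_per_m, force_per_m / 267e9)                # ~5e9 N/m, ~0.019 m

So ~5 MA at 1 mm spacing gives roughly 5e9 N per metre of filament; spread over a ~2 cm contact width that averages to 267 GPa, and over anything narrower, more.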
Current and magnetic field both have damping effects on a material's superconductivity. Breach the critical current threshold in particular and you get catastrophic material failure.
This wouldn't exist on our earth, but there are superconducting neutron stars out there (at pressures far greater than we can produce on earth). I wonder if this has any implications for those.
GP was talking specifically about the approach in the article, which requires massively high pressures to sustain, not superconduction in general. I don't think the required pressures are really practical for any of the things you listed.
Sure, it's not ready for those use cases yet, but the fact that it has been achieved at room temperature is a milestone, and those use cases are things that would benefit from room temperature superconductivity.
No, it is not. Even though the goal of "room temperature" is technically met, it is misleading, because "room temperature" was meant as shorthand for superconducting under quite normal conditions, so as to be of practical use.
And when you have to apply that immense pressure, it means we did not really come closer to superconducting in the normal world.
Still a success, yes, but probably not a milestone, unless the discovery leads to other findings, and I do not see anything indicating that.
You can do this today, and if you get energy for free, then the transmission losses are of little significance (they are just a few percent to begin with).
Superconductors won't play a role in energy transport until their installation and operational costs over their lifetime are less than the installation and operational costs, plus losses, of legacy conductors.
To be more precise, the losses compound exponentially with distance. "A typical loss for 800 kV lines is 2.6% over 800 km"[0], which would be close to 60% losses between antipodes[2].
I think that’s still plausibly worth doing, given renewable prices[1], but it’s not great if you can avoid it.
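The compounding arithmetic, for reference (this assumes the quoted 2.6% per 800 km applies segment-by-segment and an antipodal great-circle distance of roughly 20,000 km; real routes are longer, which pushes the figure up toward the ~60% quoted above):

    # "2.6% over 800 km", compounded segment-by-segment
    distance_km, segment_km, loss = 20_000, 800, 0.026
    delivered = (1 - loss) ** (distance_km / segment_km)
    print(f"{1 - delivered:.0%} lost")   # ~48% on the great-circle path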
For the metric people, from the actual paper abstract:
> Here we report superconductivity in a photochemically transformed carbonaceous sulfur hydride system, starting from elemental precursors, with a maximum superconducting transition temperature of 287.7 ± 1.2 kelvin (about 15 degrees Celsius) achieved at 267 ± 10 gigapascals.
It absolutely boggles my mind that a science web site, reporting on a science breakthrough would report the result in such antiquated units. WTF, people?
Maybe this is a crazy idea. If we put superconducting cables around Mars at say +/-50 degrees latitude, can we create a planetary magnetic field to prevent atmospheric removal from the solar wind? Would the atmosphere start to thicken?
I posed this to an EM friend and he estimated 1,000,000 Amp-turns would be required. Never checked his math but that current seems plausible with a good superconductor, plus it's cold on Mars!
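A zeroth-order way to check that estimate (purely illustrative: it treats the whole thing as a single planet-sized loop and only evaluates the field at the loop's centre, B = μ0·NI/(2R), ignoring the actual magnetopause physics):

    import math

    mu0 = 4e-7 * math.pi   # vacuum permeability, H/m
    amp_turns = 1e6        # the friend's 1,000,000 amp-turn estimate
    R = 3.39e6             # Mars' radius in metres, taken as the loop radius

    # Field at the centre of a single planet-sized current loop
    B = mu0 * amp_turns / (2 * R)
    print(B * 1e9, "nT")   # ~185 nT, versus ~30,000-60,000 nT at Earth's surface

Whether a field of that order gives a useful standoff distance against the solar wind is a separate question beyond this back-of-envelope check.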
You don't need to recreate a full planet-sized magnetic field for that, you can more feasibly put a much smaller dipole at the L1 Lagrange point that will deflect the solar wind sufficiently so that it avoids Mars.
Mars doesn't need a magnetic field; without one it takes hundreds of millions of years to lose its atmosphere. It's probably much easier to just top it up a bit every few million years.
The magnetic field could help shield Mars from radiation. This recent paper on building an artificial martian magnetic field with a few thousand kilometers of superconducting wire looks fun.
Not really. We needed to industrialize to get to the point we are at now, where we can manufacture clean energy. Coal and oil got us humans here through their simplicity (burn it), so that we can now have clean alternatives like solar panels and wind turbines.
So no, cleaning up isn't easier or we wouldn't be fucked right now. By the time we get to Mars as a colony we will, by necessity, have the tech to produce clean energy and will not be able to rely on oil. Thus starting fresh without polluting from the onset - something that was impossible on our own world.
Escapism can be a precursor to failure too. I'm not being cheeky. I think that we're not open enough about the fact that we're jumping ship, because we're not sure we can take care of this one. That's important, because it carries serious concerns for how well we'd do on Mars.
I see it not as escapism, but as steps to learn how to take care of a limited resource. Large-scale geoengineering will be necessary sooner or later on Earth. However, it almost certainly has failure modes that we don't know about, and won't know about until we can experiment with it. Testing the effects on Earth, with nearly 8 billion people, is wildly reckless. Testing the effects on Mars or Venus, though costlier to implement, has the advantage of not risking those 8 billion lives.
One thing to watch out for though, is that it's only half of the rationale behind escapism: we're concerned about our own stewardship of the Earth, but there's a very real concern that a disaster could happen to it that's not of our own making. An asteroid, for example.
Mars would protect us from several categories of these, and becoming multi-stellar would protect us from several more.
That's what I've always thought as well. If there's a problem with our culture that we keep passing down through the generations, we could buy some more time by escaping, but the problem might go with us.
If we don't fix the problem at its core before we colonize other planets, we will become an interplanetary virus, working as a parasite and killing our hosts over time.
Hedging is not a bad idea. We clearly have the resources for it. And to make matters worse, every day we are discovering something new about the instability we are wreaking upon this planet. Why on earth would you argue against hedging when we are so badly ignorant that we might see a planetary collapse, from some minor variation, sufficient to wipe us all out?
Fair. But I think there is a lot we can learn about sustainability and what humans really need by putting them on an empty planet with zero natural resources except the minerals in the ground and some frozen water.
Mars will build up to sustainability, while on earth we try to cut back to sustainability.
I think the idea of "jumping ship" is silly; it is technically impossible in the near future. But the urgency of expanding existed long before any recent events. To survive we need to spread.
This is a major, life-altering question. If we can't learn to all live together peacefully, and to help each other solve problems here on Earth, what makes anyone think that we will survive in space & beyond...
I love everything about space exploration, but I'm not naive enough to believe its the solution to our problems here on earth. One might argue its a distraction from our ongoing global humanitarian crisis.
People everyday are dying from lack of food, water, shelter, etc...
The time & money spent on solving the galaxy's mysteries could be brain-power-backed capital used to solve our dire terrestrial affairs. IMHO...
It's a bit of a false dichotomy I think. Injustice causes our global humanitarian crises and while rocket scientists are very smart they're probably not the best people to solve corruption and injustice.
There are 7 billion people on earth. That gives us a bit of leeway to multitask. We can have activists and rocket scientists solving different problems.
> The time & money spent on solving the galaxy's mysteries could be brain-power-backed capital used to solve our dire terrestrial affairs
This is often repeated but makes no sense at all. The time and resources humanity as a group spends on those activities corresponds to 0.1% of our output. Infinitely more is wasted on mundane stuff like manufacturing cars, golf carts, office jobs or reading online forums.
> "In 1970, a Zambia-based nun named Sister Mary Jucunda wrote to Dr. Ernst Stuhlinger, then-associate director of science at NASA’s Marshall Space Flight Center ... Specifically, she asked how he could suggest spending billions of dollars on such a project at a time when so many children were starving on Earth. Stuhlinger soon sent the following letter of explanation ... later published by NASA, and titled, “Why Explore Space?”"
I think you started great, but didn't follow through on your own thought.
"learn to help each other solve problems"
The most important problem we need to solve, is how to survive in the universe, where any large rock falling from the sky can wipe out our civilization, if not the whole mammalian branch.
We don't have to abandon efforts to improve human life on this planet while trying to expand to more than one.
The root causes of many (but not all) of our major problems are political or social in nature and can't be solved by throwing money or engineers at them. Also, there are many people on Earth, "we" can work on multiple problems simultaneously.
Are you asking if we can bump the existing magnetic field dynamos into stability? It's an interesting idea but given the size and power of earth's natural field, I'm pretty sure that the math would work out to more energy than all of earth's resources could provide or something like that. You'd be literally manipulating the core of the earth.
Does anyone have a key understanding of why these extreme pressures enable superconductivity?
I'm trying to imagine how these extreme pressures would modify bond angles, nuclei spacing, and constraints on motion. And also trying to understand how that affects the behavior and creation of the Cooper pairs.
Also a handwavy explanation aimed at people who aren't familiar with a lot of the concepts of condensed matter physics. Please salt with the knowledge that current theory can't fully explain how high temperature superconductors work. And that I'm not an expert in the field.
First concept: virtual particles vs real particles. When we talk about "an electron flowing through metal" it is not actually a single electron. As it moves, the electron will move into an atom and another gets knocked out. But in aggregate it "acts like" a single particle, with possibly different properties from a real electron. For example, it likely has a different mass. A virtual photon will travel slower than a real one. And so on.
Virtual particles can even correspond to things that aren't particles at all! For example sound is a wave, and quantum mechanically is carried by virtual particles known as phonons. These act exactly like any other particle, even though they are actually aggregate behavior of lots of other things!
A Cooper pair is a pair of things (e.g. electrons) that are interacting enough that they have a lower energy together than they would apart. Electrons are fermions, with half-integer spin. They have a variety of properties, such as obeying the Pauli exclusion principle. A bound pair of electrons becomes a virtual particle with an integer spin, which makes it a boson, which behaves differently.
Superconductivity happens when charge is carried by bosons.
In high temperature superconductors, it looks like the electrons are at least partially bound by interaction with phonons. The high pressures change the speed of sound, and therefore change how easily Cooper pairs form.
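To make that knob explicit, here's the textbook weak-coupling BCS estimate (a sketch of the conventional phonon-mediated picture only; quantitative work on hydrides needs strong-coupling Eliashberg-type treatments):

    k_B T_c \approx 1.13\,\hbar\omega_D\,e^{-1/\lambda}, \qquad \omega_D \sim \sqrt{k/M}

Here omega_D is a characteristic phonon (Debye) frequency, lambda the electron-phonon coupling, k an effective lattice stiffness and M the ionic mass. Compressing the lattice stiffens k and raises omega_D, and hydrogen's tiny M raises it further, which is part of why hydrides are such good candidates.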
Everything that I said above was based on what was known a couple of years ago.
However https://phys.org/news/2019-04-mechanism-high-temperature-sup... claims that there is now a theoretical explanation for high temperature superconductors, and the best guess above doesn't seem to be the real explanation.
Remember what I said about particles having a different mass moving through materials? The binding together of electrons through interaction with phonons seems to depend on the mass of the electrons. When you squeeze the lattice, that mass decreases.
>In high temperature superconductors, it looks like the electrons are at least partially bound by interaction with phonons. The high pressures change the speed of sound, and therefore change how easily Cooper pairs form.
Interesting. Do we know if it possible to disrupt superconductivity with sound at just the right frequency? And the converse, has anyone tried to enhance superconductivity by using sound (i.e. increase either the critical temperature, increase the current density, etc)?
HTS will stop superconducting once a certain amount of energy is added. This energy can be in the form of heat, magnetic field, electric current, or mechanical strain. If you keep the HTS colder you can accommodate more of the other forms of energy. I do not know if sound would disrupt superconductivity but since sound is a form of energy it is very likely.
Like another poster already said, both lower temperatures and higher pressures confine the movements of the atoms, so either of them can cause the same phase transitions.
Besides this new example with superconductivity, there are other more familiar phase transitions with the same behavior.
For example, with most liquids, in order to solidify them you may either cool them or compress them.
The same if you want to liquefy gases, either cooling or compressing has the same effect.
Room-temperature superconductivity at very high pressures was predicted many years ago, but it is very nice to have an experimental confirmation.
Handwavy explanation: the particles pair up because of vibrations in the crystal. It's modeled like a bunch of metal balls with springs between them; you can imagine tapping one end and sending a wave of vibrations through. However, these springs are a bit non-linear, so I imagine that if you pack the atoms closer together you will change the spring constant.
The other knob you can use to change the vibrations is the mass of the balls. This can be done by using different isotopes of the same element and the critical temperature goes down with mass.
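In the simplest BCS version of this balls-and-springs picture the scaling is explicit (the ideal case; real materials deviate from the 1/2 exponent):

    \omega_D = \sqrt{k/M} \quad\Rightarrow\quad T_c \propto \omega_D \propto M^{-1/2}

so heavier isotopes (bigger M) lower T_c, and stiffer springs (bigger k, e.g. from compression) raise it.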
The particles can't pair up, because equal charges repel. That's still the virtual model.
I don't quite remember my intro to electrical components, though it's a quick read for the basics. The GP obviously knows about atom models and band gap.
The paradoxical bit is that, as far as I can tell, pressure is roughly equivalent to heat, and heat equals decreased intrinsic conductivity. But if I imagine that high pressure restricts the absolute motion of particles, that would equal decreased resistance (like an idealized fixed suspension for your swing, one that doesn't take energy out of the system).
Since hydrogen is involved, I suppose there's a channel of bare hydrogen cores without any electrons, and the high pressure is needed to keep the hydrogen from moving apart and recombining outside the ensemble. Surely this involves some form of entanglement? Which I imagine as a kind of clockwork, all cores spinning in unison.
Type I superconductors (the ones people understand) happen because electrons pair up.
The equal charges participate in the problem, but do not stop the electrons from pairing up. There is a lot of virtual particle exchange between them, but that's how forces happen. It's more correct to say that the crystal mechanically constrains the electrons into pairs than that the electrons pair with virtual particles.
(IANAP, but this one topic I have studied a little.)
I have a different explanation. Think of a material as a sponge for heat. When I squeeze the material, I raise the temperature, and that causes heat to leak out. The temperature of the material doesn't really tell me how much heat is in it, so this experiment is suggesting that it is the heat itself that prevents superconductivity.
Now a superconductor is just a conduit for electrons that doesn't generate heat. We know from Landauer's principle that heat is only generated when you destroy information. If I take a pair of entangled electrons, those electrons contain exactly one bit of information (in the von Neumann sense). If I cannot add energy in excess of the energy required to disentangle them, then that bit of information is never destroyed.
Whether or not a given interaction between the electron pair and the substrate has enough energy to disentangle them is not a function of temperature, it is a function of the actual energy that may be imparted to my pair. Which is proportional to the actual heat in my material, rather than its temperature.
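For scale, the Landauer bound mentioned above is minuscule at room temperature (this is just the standard k_B T ln 2 figure, quoted without endorsing the argument):

    E_{\min} = k_B T \ln 2 \approx 1.38\times10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693 \approx 2.9\times10^{-21}\,\mathrm{J} \approx 0.018\,\mathrm{eV}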
I wonder if this has implications for fusion. In fusion, you have tremendous pressure pushing outward against the containment, due to the magnetic field "squishing" the plasma to a density that's enough to promote fusion and redirect scattering forces back inward.
Of course to create that magnetic field, you have to have superconductors very close to this superheated plasma. So the first thing to relate to this is there may be less cooling required.
The second thing is, and this is both a stretch AND possibly a huge gain, but perhaps the required pressure for the superconduction can be provided by the inherent pressure of the fusion reactor core.
That's an interesting idea. I wonder if that couldn't be used to bootstrap a room-temperature superconductor: cool it down, start a magnetic field, and let the magnetic field reaction compress the material so that you can let it heat back to room temperature. Could an external or room-temperature field also possibly be enough?
Yeah you'd still need a bootstrap to get the reaction going, and therefore generate the pressure. The question is, does this material have two superconduction modes: one at low temperatures and pressures, and another at high temperatures and pressure?
The idea would be to create a superconductor with a pressure/temp curve that is amenable to the pressure/temp curve of the starting sequence of a fusion reactor.
> Over the course of their research, the team busted many dozens of $3,000 diamond pairs. “That’s the biggest problem with our research, the diamond budget,”
> “It’s clearly a landmark,” said Chris Pickard, a materials scientist at the University of Cambridge. “That’s a chilly room, maybe a British Victorian cottage,” he said of the 59-degree temperature.
Sounds like prices for gem cut diamonds, which brings in the whole DeBeers monopoly pricing. I wonder why manufactured or rough cut diamonds couldn't be used.
Having worked in materials science research, the problem is that you generally need a very bespoke specific thing crafted for you by a professional lab supplier, and that is expensive.
From what I understand, the issue with lab-grown diamonds is that they can't really grow them beyond a certain size at this time. I think clear ones are a couple of carats, and colored ones are roughly double that. I could be off a bit. Regardless, that's not huge, though I don't know what size they require. Maybe it's sufficient.
Don't forget neutrons! Not as quick to measure, and not so good with very small samples, but well-suited to combinations of extreme environments beyond just pressure, such as temperature, magnetic field, voltage gradient etc. https://www.isis.stfc.ac.uk/Pages/Pearl.aspx
Amusing to see for an x-ray crystallographer that the neutron scattering coefficient for tungsten carbide is actually lower than for pure carbon. Neutrons are weird.
My hypothesis is that they have imperfections in them that lead them to be structurally weaker than ones crafted by the natural pressures of the Earth.
This is actually not true! This was true maybe 15-20+ years ago, but since then lab diamonds have gotten very pure. So pure that now Big Diamond markets their flaws as "natural characteristics" that make their diamonds unique.
Pretty funny if you ask me.
That being said, lab diamonds are not necessarily that cheap, depending on the dimensions and qualities necessary.
Yeah but they still have defects. There is no such thing as a defect-free material, it's thermodynamically unstable. Under stress, the defects (voids, dislocations etc.) lead to crack propagation and the diamond is kaput.
Lab-grown are better than natural in the sense that they have a lower density of defects. But they still have defects, which means they will break under sufficient stress. It doesn't matter if the diamonds they source are lab-grown; they will still break under sufficient stress.
I'm disputing the notion that, in order to prevent the breakage of the diamond anvil cells the researchers used, they should source lab-grown diamonds. I'm saying that those, too, will break because they contain defects. Lab-grown is better, but still not defect-free.
Saw the title and thought "oh, I'll bet it is at some insane high pressure or some other exotic condition". Clicked through to an image of a diamond anvil. Not disappointed.
I'm actually really happy they put the catch at the top of the article.
It's so annoying to read science articles about how X will revolutionize Y, but you have to dig through the comments section to find out why it won't work.
Most new research findings only have very specific applications. It's only groundbreaking when something can (eventually) be implemented in real life for a reasonable cost.
Quanta does some of the best science reporting. They have a knack for making highly complex and technical concepts accessible to the general public without sacrificing accuracy.
They don't get points for putting it at the top of the article - all they're doing is correcting their own misleading title.
And yes, I say misleading. Technically true but misleading, because the omission is absolutely critical to the nature of the breakthrough, and, as you implied, anyone who knows the first thing about room-temperature superconductors will want to know whether the material has a drawback stopping it from functioning outside a strictly lab setting.
I wonder whether it might be possible to create a material using this and carbon nanotubes worked through it. With the idea that the nanotubes can create and hold the pressure for the superconductor to operate.
This was achieved in a diamond anvil cell: a sample smaller than a millimeter is squeezed between two diamonds in a special apparatus. This is how you achieve world-record high pressures; it's not even remotely in the realm of possibility for engineering a material.
The bulk modulus of superhard phase nanotubes is 462 to 546 GPa, even higher than that of diamond.
Engineering a wire under pressure whose whole length is compressed by a structure made out of carbon nanotubes is clearly difficult, but seems theoretically possible. It is very likely beyond our current engineering capabilities. But in principle it is a technology that we could try to develop.
For example at low temperature you assemble a wire that has a high thermal coefficient of expansion down the center of the wire. Then the superconductor around that in a ring. Then a carbon nanotube sheathe around that which traps things. Then when it warms up the core squeezes the superconductor against the sheathe and you get the pressure.
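For scale, a crude sanity check with ballpark numbers I'm assuming for an ordinary metal core (bulk modulus ~75 GPa, linear expansion coefficient ~2e-5 per K): a perfectly confined isotropic solid warming by dT develops roughly 3*K*alpha*dT of pressure, which comes out around a GPa and change for a few hundred kelvin, far short of the ~267 GPa in the experiment. So the squeeze would have to come from something much more aggressive than ordinary thermal expansion.

    # Rough upper bound on pressure from a perfectly confined, thermally
    # expanding core: P ~ 3 * K * alpha * dT (bulk modulus K, linear
    # expansion coefficient alpha). Ballpark metal-like values, assumed
    # for illustration only.
    K = 75e9        # bulk modulus, Pa (aluminium-ish)
    alpha = 2.0e-5  # linear thermal expansion coefficient, 1/K
    dT = 300.0      # warming from cryogenic assembly to room temperature, K

    P = 3 * K * alpha * dT
    print(f"confined thermal-expansion pressure ~ {P/1e9:.1f} GPa")
    print("required pressure in the experiment  ~ 267 GPa")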
Maybe if you’re writing science fiction or have a time machine, not if you’re an engineer. For starters the bulk modulus is about deformation, not strength. Second issue is that you’d be creating explosive cable. Go watch some youtube videos of tempered glass exploding and then imagine what a material under 1000 times the pressure would look like in failure.
The whole excitement about room temperature superconductors is getting rid of the difficulty of cooling, the difficulty of this pressure requirement is easily much worse.
Eh, something similar (not quite as impressive) had been observed before: high-temperature superconductivity with sulfur plus pressure. It's unclear how you would make that practical.
Hopefully as we pile on more and more examples of different materials exhibiting superconductivity, we can understand it well enough to find a practical high temperature one.
Though I think we could go a long way with the liquid nitrogen temp superconductors now on the market. It's still going to be a real chore to design around, but it's got to be a lot easier to deal with liquid N2 than liquid He.
It's not far off. Engineers see if it's doable for a given budget, physicists show and analyse that it's possible at all.
Similar to a CS paper showing a new algorithm that, e.g., sorts with x% fewer swaps than quicksort: it might not actually lead to a performance increase on real hardware.
Can someone much smarter than me clarify if this is supportive of, or related to, the US Navy patent from several years ago for Piezoelectricity-induced Room Temperature Superconductor? [0][1]
This finding is real science. The patent you cite is unrelated and 100% bullshit.
For example, it states preposterous sentences such as this:
"The fact that the fine structure constant can be expressed as a function of (2e) shows how important the notion of electron pairing is in the composition of the Universe, and gives credence to the theory that the fundamental cosmic meta-structure may be thought of as a charged superfluid, in other words, a superconducting condensate."
This guy is a scammer who was able to bamboozle his patent attorneys, presumably he gets some incentive to publish patents?
Is that preposterous? The idea that "spacetime"(?) may have superconductive properties doesn't seem facially outlandish, but again I'm not a fancy scientist. He is an awarded US Navy scientist though, and these patents were specially requested by Navy brass, so I'm not inclined to think he's a scammer that slipped one by his attorneys.
I have met someone who got a stupid patent — IIRC it was a two bit binary adder — because the patent lawyer messed up what he sent to them.
He didn't check before it was filed because he didn't care (the point of the patent was "we needed a patent-protected system to be granted a license to a codec"), and it was granted anyway.
If you look up the history of this patent, the first few times it was submitted it was denied; then some US general comments on it and it basically gets rubber-stamped through.
From what I can see using a very basic understanding of superconductivity, in that patent superconductivity is achieved by:
1) Taking a wire, and mechanically inducing a wave of lattice vibrations
2) Firing a pulse of electricity down the wire to "ride the wave" of superconductivity produced
whereas normal superconductors (including this one) produce superconductivity because the first (negatively charged) electrons in a wave of current pull in the positively charged lattice ions as they pass, creating a region of concentrated positive charge that attracts the second wave of electrons traveling behind (which in turn does the same to the third).
(1) Please read the terms of the data sheet carefully and consult your local engineer before using. Certain restrictions may apply. 0 ohm not available in all jurisdictions.
There is no such thing as a pure zero resistor in this universe. A resistor made from a single atom would still require energy to move the electron. That's how physics works in this universe. Hence no, you'll never have a zero-ohm resistor.
267 GPa isn't unreasonable to achieve 'in the home'.
For example, if you had a rod of the superconducting material 1 mm in diameter, and you wrapped it tightly with a strand of Kevlar (tensile strength 3.6 GPa) until the bundle became 100 mm in diameter, then the center would have a suitable pressure...
This doesn't seem right, how does a rod go from 1mm in diameter to 100mm in diameter under pressure? Also, how does one "wrap tightly"? The material would fracture under the stress of the first wrap.
You're right - I assume that Kevlar's spring constant is << the rod's.
If that's the case, you just set the tension in the thread so the Kevlar is near its breaking point, and start winding, like winding thread onto a bobbin...
Not being a mechanical engineer, I don't have an intuitive feel for how the additive nature of the pressure develops as you describe. I would have thought that the outer layers would start to compress the inner layers, relaxing some of their stress. Is there a mech-e 101 type of link you could pass along to help get me up to speed? Or maybe you are saying that 3.6 GPa times the ratio of 100 mm to 1 mm gets us to the 360 GPa mark? So you don't need layers of Kevlar as your "anvil"; you could use something else more rigid, and just wrap one layer of Kevlar around it to develop the needed pressure. Thanks.
This is true, however in this case the tensile strength of the casing needs to be focused into a small area and that is done by translating the pressure from a large area into a small one.
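Not a mech-e either, but the crude estimate I'd reach for (assuming, unrealistically, that every layer keeps its full winding tension): each thin shell of thickness dr at radius r under hoop stress sigma adds dP = sigma*dr/r of inward pressure, so winding from r_in to r_out gives P ≈ sigma*ln(r_out/r_in). With Kevlar at 3.6 GPa, a 1 mm core and a 100 mm bundle, that's only about 17 GPa, well short of 267 GPa, even before accounting for the inner layers relaxing.

    # Idealized pre-tensioned winding: P = sigma * ln(r_out / r_in),
    # from integrating the thin-hoop relation dP = sigma * dr / r.
    # Assumes every layer stays at full tension, which real windings won't.
    import math

    sigma = 3.6e9   # Kevlar tensile strength, Pa
    r_in = 0.5e-3   # core radius, m (1 mm diameter rod)
    r_out = 50e-3   # outer radius, m (100 mm diameter bundle)

    P = sigma * math.log(r_out / r_in)
    print(f"core pressure ~ {P/1e9:.1f} GPa   (target: ~267 GPa)")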
Room temperature plus one atm pressure is a separate term, STP - "Standard temperature and pressure".
(Of course it's hilariously not "S" at all, because no one can agree whether the "T" is 0 deg C or 25 deg C or somewhere in between. But when we're talking about room-temperature superconductors that's not really a big deal.)
Okay, should we be talking about "STP superconductors" then? I doubt anyone envisioning 'room temperature superconductors' was envisioning a diamond anvil.
> As the temperature of a superconductor rises, however, particles jiggle around randomly, breaking up the electrons’ delicate dance.
Hasn't this pretty much always been the crux - to fix the particles in place?
While it's a nice achievement experimentally speaking, is it really that surprising that materials immobilized by enormous pressure at elevated temperatures exhibit the same behavior as materials immobilized by chilling to near 0 K? Either condition is impractical to attain outside of a specialized lab.
I'm wondering, doesn't the concept of temperature change with pressure? For example, homemade fusion reactors operate at low pressures but at a temperature higher than the Sun's. So, if this circuit is operating at a high pressure, then isn't the surrounding room temperature relatively low? At the mechanical level, a substance held between diamond anvils isn't free to change momentum or kinetic energy due to the kinetic impacts of room-temperature air, vapor, or plasma.
Second question: do virtual particles have the same Casimir effects in this apparatus as we would see in low-pressure experiments? If you're interested, also check out the results published recently on measuring the Casimir force. Reference: “Casimir spring and dilution in macroscopic cavity optomechanics” by J. M. Pate, M. Goryachev, R. Y. Chiao, J. E. Sharping and M. E. Tobar, 3 August 2020, Nature Physics.
DOI: 10.1038/s41567-020-0975-9
There was a previous article posted on HN about a new record being set for the speed of sound. The medium that transmitted the sound was high-pressure hydrogen. I don't know what rabbit hole I stepped into, but it led me to an article about solid hydrogen acting as a metal and becoming an awesome superconductor at room temperature.
I thought we had reached this record with solid hydrogen. But I cannot find this online anywhere. The material this article goes over is hydrogen-carbon-sulfide. The previous record for superconductivity was using hydrogen-sulfide.
I wonder what other materials can be added to lower pressure at room temperature and maintain superconductivity. Lithium? Nickel? Copper?
Metallic hydrogen is predicted to be a room-temperature superconductor. And if my understanding is correct, it is related to the speed of sound ("phonons") in that material. Stressing the material further increases the speed of sound.
However, metallic hydrogen is pretty much theoretical at this point, although it looks like we are getting closer. No other material that I know of had achieved >0°C superconductivity before.
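For what it's worth, the phonon connection shows up directly in the textbook weak-coupling BCS estimate, k_B*Tc ≈ 1.13*hbar*omega_D*exp(-1/(N(0)V)): Tc scales with the characteristic lattice vibration frequency, which is why light atoms like hydrogen and stiff, compressed lattices push it up. The sketch below uses made-up numbers purely for illustration, not any real material.

    # Weak-coupling BCS estimate: k_B * Tc ~ 1.13 * hbar * omega_D * exp(-1/lam),
    # where omega_D is the Debye (phonon) frequency and lam = N(0)*V is the
    # electron-phonon coupling. Illustrative numbers only.
    import math

    HBAR = 1.054571817e-34  # J*s
    K_B = 1.380649e-23      # J/K

    def tc_bcs(omega_d, lam):
        """Rough BCS critical temperature (kelvin) for a given phonon frequency."""
        return 1.13 * HBAR * omega_d / K_B * math.exp(-1.0 / lam)

    lam = 0.3  # assumed coupling strength
    for omega_d in (5e13, 1e14, 2e14):  # stiffer / lighter lattice -> higher omega_D
        print(f"omega_D = {omega_d:.1e} rad/s  ->  Tc ~ {tc_bcs(omega_d, lam):.1f} K")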
Temperature was one way to change the properties to enable superconductivity, pressure now being another.
So in effect the question has shifted from "can we get superconductivity at room temperature?" to "can we get superconductivity at room temperature and pressure?"
Whilst many will point out that the advance here is not going to make nice long wires, it does show that for some small devices there is another avenue beyond changing just the temperature.
> So in effect the question has shifted from "can we get superconductivity at room temperature?" to "can we get superconductivity at room temperature and pressure?"
Not really though, right? We all already own things under higher or lower pressure than normal and they can be kept that way essentially indefinitely. Light bulbs, hard drives, thermoses, etc. On the other hand keeping something below room temperature - especially really far below room temperature - takes constant work.
In theory you could make a cable that keeps the conductive material under high pressure, but you're probably not going to have any luck attaching an air-conditioning unit to your cables.
I'm completely convinced that future superconductors and stronger magnets will lead to desktop sized fusion generators.
It's so exciting because there's no obvious limitation to why it wouldn't work. If superconductors improve at current rates we'll just end up there in 30-40 years naturally.
To think that in another generation humans might have cheap, limitless power. It's tantalizing.
When you increase pressure, aren't you also increasing temperature? So even though the ambient temp may be 59 °F, isn't the localized temp where the atoms are being crushed super high? Which is also the opposite of how I thought superconductors work, i.e. the liquid-nitrogen-poured-on-a-superconductor experiment that physics profs love to do.
I realize this is probably a dumb question, but the bit about combining hydrogen and carbon made me wonder: has anyone tried just, like, compressing high-density polyethylene and seeing what happens, maybe with some sulfur or whatever else involved? If you want hydrogen and carbon in close quarters, I would start there.
"With this kind of technology, you can take society into a superconducting society where you'll never need things like batteries again," said co-author Ashkan Salamat of the University of Nevada, Las Vegas.
Didn't know superconductors can be used for energy storage! What kind of energy density are we talking about?
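Rough numbers, since nobody gave any (my own estimate, not from the article): the energy stored in a magnetic field is B^2/(2*mu_0) per unit volume, so even a strong 10 T coil holds only about 40 MJ/m^3, roughly 11 kWh per cubic metre, well below chemical batteries by volume. SMES is attractive for very fast charge/discharge, not for bulk energy density.

    # Energy density of a magnetic field: u = B^2 / (2 * mu_0), in J/m^3.
    # Quick sense of scale for superconducting magnetic energy storage (SMES).
    import math

    MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

    for B in (1.0, 10.0, 20.0):  # magnetic field strength, tesla
        u = B**2 / (2 * MU_0)
        print(f"B = {B:4.1f} T  ->  {u/1e6:8.1f} MJ/m^3  = {u/3.6e6:7.2f} kWh/m^3")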
I wonder if this could benefit from a “magic angle” or some other structural layout that makes quantum effects possible. More and more advancements seem like they would benefit from precise atomic manufacturing, that could be the key technology that unlocks all of these futuristic applications.
I guess, between economics, room temperature and normal pressure, silver still stays king of conductivity, just like it did over a century ago when modern electronics was born with the first high-voltage vacuum diode.
Room temperature in a diamond anvil doesn't sound as useful as plain room temperature, but I guess having a kitchen-variety refrigerator instead of liquid helium makes for cheaper experiments :)
This is some nice work, but doesn't really pave the way to what I'd call "practical superconductors".
Of course, the material studied here requires some of the highest pressures we can produce, so we can't deploy the technology. More important, IMO, is the fact that we were already expecting room-temperature superconductors via this route. There is a theory of superconductivity which accurately describes these materials.
The more interesting materials are the unconventional superconductors which have a high transition temperature. From the side of physics, we still have no idea how they work other than something involving many body interactions. Plus with their rising Tc, they might be a route to practical superconductivity.
They're running right at the limits of what is possible.
One easy way to see this: these effects only appear at highest pressures, and I believe that the diamond anvil breaking strength isn't terribly consistent (one defect or error in sample-preparation? poof. Got a good one? You can go farther.). I wouldn't be surprised if each promising iteration of the experiment is performed by loading the anvil all the way to failure.
They're working in the dark -- nobody knows for sure how much pressure is enough to yield the next step forward in knowledge.
If the stars align, a single run of such an experiment is sufficient to yield a quantum leap (sorry) in condensed-matter physics. As an example, this particular iteration may prove career-defining for those involved.
That said, if they're constrained on budget, I suspect they think really hard about the way to optimize discovery potential within their diamond-anvil supply constraints.
Not really related but I broke tons of samples during my PhD work where we applied large stresses in an electric field to brittle tiny pieces of ceramic materials. Sometimes you just need to push the limits to do science.
No; those are independent state variables. If they weren't, your air conditioner wouldn't work. Compressing something does work, which raises temperature, but you can extract that heat and cool it down without reducing the pressure. That's half of the refrigeration cycle right there!
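An ideal-gas sketch of that (nothing to do with the actual hydride, just the state-variable point): compress air adiabatically from 1 to 10 atm and it heats to roughly 580 K; then dump that heat at constant pressure and it sits at 10 atm and 300 K indefinitely. Pressure and temperature only move together while you forbid heat exchange.

    # Ideal-gas illustration that pressure and temperature are independent
    # state variables: adiabatic compression heats the gas, then cooling at
    # constant pressure returns it to room temperature while staying compressed.
    T1, P1 = 300.0, 1.0  # start: room temperature (K), 1 atm
    P2 = 10.0            # final pressure, atm
    gamma = 1.4          # heat capacity ratio for air

    T_adiabatic = T1 * (P2 / P1) ** ((gamma - 1) / gamma)
    print(f"right after adiabatic compression: {T_adiabatic:.0f} K at {P2:.0f} atm")
    print(f"after rejecting heat at constant pressure: {T1:.0f} K at {P2:.0f} atm")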
I'm not understanding how, given the entirely abnormal contrivances needed to cause this, they can call it room-temperature. I mean, for what definition of "room"?
In the 50s a few-megabyte hard drive was the size and weight of a refrigerator and came with a huge price tag to match: impractical price and impractical size. It'll be interesting to see what we can do in a few decades!
'Room-Temperature Superconductivity Achieved for the First Time'
Quanta Magazine ought to know better than to lead a story with a misleading headline like that. Moreover, they've made it worse by 'exaggerating' the temperature by quoting it in degrees Fahrenheit instead of Celsius (using Fahrenheit is a no-no in science).
That superconductivity was achieved at room temperature but at that huge pressure is hardly worth reporting, as it's of no practical value whatsoever and thus of interest only to fundamental research.
Fundamental research is of great importance. It is where the breakthroughs of practical value begin.
Our understanding of electricity did not emerge with the lithium battery, transistors, nor solar cells. It began with scientists studying static electricity, Leyden jars, and making frog-legs twitch, all without practical value.
Superconductivity in any context at room temperature is a substantial breakthrough, doubly so because the experimentalists were operating with semi-viable guidance from theory. This is physics at its best. It may well lead to higher-Tc materials.
You didn't read what I said. I was being critical of Quanta's sensational and misleading headlines; I was NOT being critical of fundamental research.
'That superconductivity was achieved at room temperature but at that huge pressure is hardly worth reporting, as it's of no practical value whatsoever and thus of interest only to fundamental research.'
It shouldn't be necessary for me to labor the point, but my quote above does NOT say that I'm against fundamental research (in fact I'm very much for it and have always been so).
Frankly, I'm annoyed that you can even suggest what you have said. Just because you disagree with part of what I've said does not give you the right to twist my words to mean something altogether different—something I did not say. I am very careful in what I say online and usually I labor the point by restating what I'm saying in different ways so that I'm not misinterpreted (unfortunately I did not do that here as what I was saying seemed clear enough).
With respect to science reporting generally: sensational and misleading reporting by the media—often encouraged by scientists themselves to increase their chances of getting new grants, etc.—has done a great deal of damage to science and scientific research over the last five or so decades. Probably the 'best' example of this is cancer research, which has turned great swathes of the public off science big-time because the enthusiastic promises of cures were never met. You only have to see how science and scientific research have dropped in importance in the eyes of the public over the last half century to know that. In other posts I've even spelt this out with examples.
Moreover, I stand by what I said: this research into superconductors is so exotic that it is of little practical value now. I am not saying that it won't be of value in the future (only time will tell).
It wasn't obvious until a few years ago that high-pressure superconductivity was a thing at all.
It's absolutely worth reporting that room-temperature superconductivity is achievable. There are a bunch of applications in things like quantum computing that this makes simpler.
The title definitely should have mentioned the pressure needed though. It's the difference between "revolutionary" and "probably no practical applications, but nice for academia I guess".
Quanta is not written for scientists, it's written for lay people. They do good work translating today's science and math into everyday language without too much oversimplification, but it's always a balance.
Most American (and many British) readers use Fahrenheit to describe the weather and room temperatures, and given the context, it absolutely makes sense to use it in this article.
'Quanta is not written for scientists, it's written for lay people.'
This is all the more reason why Quanta should not misrepresent or exaggerate the research. You should read what I have to say above in reply to ISL 4 about the matter.
'Most American (and many British) readers use Fahrenheit to describe the weather and room temperatures, and given the context, it absolutely makes sense to use it in this article.'
You're not trying to tell me that Quanta wasn't hamming up the report by using Fahrenheit, are you? Come on, pull the other one.
Next, I suppose you'll be telling me that not enforcing the use of SI units wasn't the reason for Hubble's 'blindness'. The UK is supposed to be metric, but unlike Australia the conversion was stuffed up as it wasn't enforced by legislation (in Australia, hardly anyone knows what Fahrenheit means these days).
The fact that the US is so damn backward in this matter is a combination of factors (a) bloody-libertarian mindedness in that people can't be told even if it's for their own good (just look at the US's fiasco over masks and COVID-19 and you'll get the message); (b) industry (mainly heavy industrial/machining) pulled too many strings in Congress saying the conversion would cost too much; and, (c) the US education system is far too fractionalized to agree upon anything let alone the Metric System/SI units. Thus the US is the laughing stock of the world, as it's the only major power left officially with imperial (British) units. It's even more laughable given what took place in the US in 1776! How many more centuries does the US need?
That said, the people who write for Quanta and Quanta's editorial policy ought to be in line with the rest of the scientific world. Keeping Fahrenheit only 'absolutely makes sense' if one's still living in a time warp that should have long since gone. Sorry!
Wow, this transformed into a weird microrant. I didn't mean to offend - apologies if I have. To be clear - imperial units are awful. Everyone knows this. Americans know it, Brits know it, and while we might occasionally defend this system tongue-in-cheek, we know it's objectively idiotic. We're the ones that have to live with this thing every day.
> b) industry (mainly heavy industrial/machining) pulled too many strings in Congress saying the conversion would cost too much
This is really the only reason. The U.S. is somewhat unique because it had already heavily industrialized prior to the international push toward metric, and had basically no rebuilding to do after the war. It's similar to the technical debt that affects most giant old companies, but at nation scale. An (arguably) poor early design choice in industry-building and nation-building.
In history, generally, the largest economic entity gets to use whatever units it wants, and force the rest of its trading partners to deal with it. The British Empire picked their units, which the USA inherited, and kept using due to inertia. If/when either the E.U. or China eclipse the USA's GDP, you can probably expect the USA to try again to shift to metric.
This is all unrelated to anything, of course. I assume the writers of Quanta articles are scientifically literate, prefer metric, but know where most of their readers are from and have chosen to use their language. I have lived in both England and America, and to this day I still don't know what room temperature is in Celsius. I really don't think this was written with misleading, malicious intent - it's just an article that wasn't written for you, but rather for (the tremendous number of) people still living in a "time warp" like me.
More about if or when the US converts to metric. Earlier today, I accidentally came across some new material that was too relevant to these posts to let it pass by. Here's a quote from a part of my post [https://news.ycombinator.com/item?id=24819990] to a related matter:
"... Incidentally, by sheer happenstance, earlier today in connection with another matter altogether, I was reading the preface in one of my old mathematics textbooks, that being Calculus by Stanley I Grossman, first edition 1977 [43 years ago] and I came across an interesting comment that's pertinent to this discussion. I quote:
"… Moreover, I have included "real-world" data whenever possible, to make the examples more meaningful. For example, students are asked to find the escape velocity from Mars, the effective installment rate of a large purchase, and the optimal branching angle between two blood vessels. Finally, as most of the world uses the metric system and even the United States is reluctantly following suit, the majority of the applied examples and problems in the book make use of metric units."
Oh dear, dear, what ever happened? Keep in mind that Grossman was no minor author; he was well in tune with what was happening, he wrote many mathematics textbooks and they were widely used in colleges, universities and US schools, and across the world. Moreover, he wasn't an outsider looking in, he was based at the University of Montana and his publisher was Academic Press which was based in New York!"
Yeah, there was a brief period in the late 70s (1975 - 1982) where we gave metric a go. My parents remember learning it in school, and some large businesses began using both systems to prepare for more strict mandates.
Right, I remember driving on Californian roads around that time which were marked in km/h, same if I recall correctly in Hawaii, only to later find that they'd reverted back to miles per hour!
There's nothing like a bit of hyperbole to get a debate started (and these days there's never enough of what I'd call formal debating going on).
I don't live in the US but I've been there many times and even worked there in a technical capacity (and most of my relatives are US citizens), so I understand where you are coming from and why the US thinks the way it does about temperature measurement.
Another comment I'd make is that the US's continued use of imperial measurements/standards is a significant cost to other countries that have to trade with the US. Exports from the US that are in imperial measurements cause all sorts of compatibility and maintenance problems. Similar problems occur with imports to the US that must be in imperial units; this adds considerable additional cost to manufacturing plants in many countries. Another problem is that in recent years the Chinese have borne much of the brunt of having to manufacture stuff in two standards—stuff in metric for themselves and everyone else except the US, then in imperial for the US. That's the position in theory anyway; the trouble is that the overflow from China's manufacturing for the US often flows elsewhere, so we end up here with nuts, bolts, etc. that are in imperial sizes, and this also causes a multitude of problems. You only have to go to one of our major hardware stores to see the problem firsthand: there are thousands of essentially identical (duplicated) items except for one lot being in imperial and the other in metric units (1/4" thread ≈ 6 mm, etc.). This doubling up means huge price hikes because of the need to hold smaller individual quantities and extra line/inventory items, not to mention all the extra shelf space. Yes, it's a right damn mess because of the US!
The same goes for the US's 110-117 volt/60 Hz power system, which often sees exported equipment blowing up from overvoltage on 220-240 volt 50 Hz systems (or said equipment needing 220/240-to-110 V transformers), as much of the rest of the world uses 220-240 V—we don't have the luxury of huge copper reserves à la the Bingham Canyon mine [sorry, as it once had] to waste on thick copper low-voltage busbars. I know from experience: only recently I blew up a very expensive Tektronix spectrum analyzer that was 'correctly' wired with a 220-240 V plug but was still internally set for 110 V. Even when US manufacturers figure out that there are other power standards besides their own and supply power transformers to suit, we often find that they've forgotten that everyone else is on 50 Hz and not 60 Hz, so the transformers overheat (lower frequency thus less inductive reactance)!
There's little wonder much of the world gets annoyed with American Exceptionalism.
'I still don't know what room temperature is in Celsius.'
I know this is a commonplace view in the US but I must admit it's a pretty odd one for most of us outside the US (except the UK, though even there they've still some notion of Celsius). I'm old enough to remember Fahrenheit. At school the first thing we did when entering the science lab was to write down the barometric pressure, humidity and the temperature in both Celsius and Fahrenheit (Fahrenheit in brackets, as it was the less important of the measurements) in our workbooks. The room-temperature puzzlement is also odd, for that one is so well known: 20°C is nominally 68°F. If you've ever done photographic processing and bought a Kodak developing thermometer—even in the US—then it had both scales on it and a big black line at this magic room temperature of 20°C. Moreover, the conversion formula was drummed into everyone until it was a mantra (almost all lab equipment is calibrated at 20°C, even in the US):
T(°F) = T(°C) × 9/5 + 32
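Or, in code, tying the article's 59 °F to the numbers above (trivial, but for the record):

    # Fahrenheit <-> Celsius, applied to the paper's quoted temperature.
    def f_to_c(f):
        return (f - 32.0) * 5.0 / 9.0

    def c_to_f(c):
        return c * 9.0 / 5.0 + 32.0

    print(f_to_c(59.0))   # 15.0 C, the article's "room temperature"
    print(c_to_f(20.0))   # 68.0 F, the nominal 20 C room temperature above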
'I really don't think this was written with misleading, malicious intent'
Room-temperature superconductivity itself is not revolutionary for anything except academic papers. Lossless power lines and frictionless trains are a gimmick, as those losses are a small part of their operating cost. The resistance of copper cables is already very small and acceptable. When the price of superconducting cables is comparable to copper cables, then it becomes interesting.
Superconductivity is phenomenologically different than "very low resistance." The excitement about high-temperature superconductors is not only that there will be less power dissipated through power lines, but they will enable new technologies in power generation, high-energy physics research, analog and digital circuitry, sensors, and communications.
As someone who knows nothing about this: what would the most obviously useful applications be, if we discovered practical room-temperature superconductors?
MRI scanners, tiny and mobile, used like ultrasound whenever, wherever at a tiny fraction of the cost. This would be one 'extension' of something we use already.
But the notion of cheap 'extremely powerful' magnets extends into other things - you'd see 'maglev' type stuff everywhere.
I would imagine some kinds of major computational leaps if it could be done at that small scale.
The efficiency of electric generators changes tremendously without resistance - if built into your local windmill it would improve generation quite a lot.
Yes except the last one. Your windmill is not starved of space to make a big dynamo that uses a lot of copper and is very, very efficient. If the superconducting material is cheap you might be able to do it without using nearly as much material or even without magnets. So it might be cheaper and smaller!
The opposite application (generating torque) is already a home for superconductors. The navy has some superconducting motors in its ships. They're way smaller and have high efficiency over a much wider dynamic range of operating torque. Those are cryo-cooled though.
Fusion power becomes a reality, MRI machines get really small and cheap, your computer gets faster and no longer needs to be cooled, better Maglev trains, lossless transmission of electrical power over long distances.
It's a holy grail technology. You would likely solve all energy problems for the human race forever.
Do you realize all those are wishes, not plausible technology right now? Fusion power has more problems than just magnets needing to be kept cold. MRI machines are a niche thing; sure, they could get cheaper, but why should the average Joe care? Computers don't just get magically faster. Computers need semiconductor switches and those operate based on energy dissipation; you can't just remove that and get a better computer. Maglev trains are an overpriced gimmick, and lossless transmission would be nice, but the current estimate of energy losses is like 5%, a negligible part of the cost.
> It's a holy grail technology.
In the sense lots of people talk about it as something important, but it is never actually seen or used.
Computers would get magically faster, though. You can build standard digital circuits out of superconducting Josephson junctions, which have much higher switching speeds than semiconductor transistors. A flip-flop was demonstrated to operate at 770 GHz.
Josephson junction is a very interesting device with many uses, including low energy consumption per FLOPS.
But one big part of the reason the consumption is so low is the low temperature. When you try it at room temperature, the switching energy goes up by about two orders of magnitude (due to higher thermal noise), and this makes JJ computer efficiency only somewhat better than CMOS. I agree this would still be interesting, as refrigeration can be much simpler then, and researching JJ-based computers would be easier.
But if we're talking about best performance per watt, the 4.2 K systems or 77 K systems are likely to beat room-temperature superconducting computers.
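The "two orders of magnitude" figure is roughly what you get just from the ratio of thermal energies, assuming the switching energy has to stay some fixed multiple of k_B*T for reliability (my quick check, not a detailed JJ energy budget):

    # If a Josephson logic element must switch with energy ~ N * k_B * T to
    # stay reliable against thermal noise, moving from 4.2 K to 300 K raises
    # the per-switch energy by T_room / T_cryo. Assumes the same margin N.
    K_B = 1.380649e-23  # Boltzmann constant, J/K

    T_cryo, T_room = 4.2, 300.0
    print(f"k_B*T at {T_cryo} K : {K_B*T_cryo:.2e} J")
    print(f"k_B*T at {T_room} K: {K_B*T_room:.2e} J")
    print(f"ratio: ~{T_room/T_cryo:.0f}x (roughly two orders of magnitude)")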
"A flip-flop was demonstrated to operate at 770 GHz."
That's data, but not information - flip-flop Hz is far higher than the processor's Hz (which basically has to synchronise over billions of different circuits and run at the lowest common denominator) - so your figure can't be compared to a normal processor's clock speed, only a normal flip flop's speed.
Anyone know what a normal flip flop's speed runs at?
The only discipline in which SMES are an interesting option is high-power pulse sources. For storage of power-plant production they are not interesting, as their energy density is an order of magnitude lower than that of batteries. They are too heavy and too big. This problem does not go away just by having a room-temperature superconductor. It would have to have an incredibly high critical magnetic field strength and be very cheap.
I think it could scale. If you can directly transfer mechanical stress from the cable to the earth, with no concerns over vacuum or thermal insulation, the economics might work. Even if you can only store ~1 Wh per kg of enclosed rock, it seems possible to enclose millions of kgs of granite in a solenoid.
It depends on more factors than the critical temperature. Keeping the magnets cool is not so difficult, liquid nitrogen is fairly cheap. You want materials that can sustain large magnetic fields before losing superconductivity.
> "power generation, high-energy physics research, analog and digital circuitry, sensors, and communications"
Could you provide some references for these things that are almost there but are waiting for room-temp superconductors? Sorry, but as written this sounds like a vacuous buzzword drop from an MIC contractor.
Isn't superconductivity a requirement for fusion reactors (for plasma confinement)? I think all the cooling required for that is a major impediment to both the design and the efficiency.
You don't need room-temp superconductors to make those magnets. REBCO tape can be cooled by liquid nitrogen. It's what they are using for the various tiny tokamak projects.
This material only appeared in the last 10 years as a practical way of creating strong magnetic fields. Give it enough time and you'll see most MRIs transition away from the large hulking behemoth machines that we have today into much smaller and more portable machines. Also, as an added benefit you can use liquid nitrogen to cool them instead of helium.
And MRI-machines.
If you are able to make superconductivity self-sustaining outside the laboratory, it can have a lot of great implications:
Reduce the cost of MRI (huge helium cost), make extremely fast batteries for regeneration of power, very efficient and small motors +++
MRI machines don't necessarily need helium even today. It's just that the common model family using niobium-tin wires is what you see in hospitals or the media. There are permanent-magnet MRI machines.
What do you mean by "fast batteries"? For regenerative braking one can already use supercapacitors. Will room-temp SMES be cheaper than that?
Electric motors are pretty efficient and small already; there is a 0.6 mm motor from Namiki. Sure, they can get smaller. How does a room-temp superconductor help that? Smaller motors have better cooling than big motors, so ohmic heating is not a problem.
In existing superconductors, you break superconductivity if your magnetic field reaches a certain threshold. That limit decreases, the closer you are to the temperature limit.
So yeah, higher-temperature superconductors could either give you higher-temperature supermagnets, or same-temperature stronger magnets. The latter is being explored with relatively new materials for smaller tokamaks, which are enabled by stronger magnetic fields.
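The usual empirical rule of thumb for that tradeoff in conventional superconductors (I'm assuming the same qualitative shape applies here) is Bc(T) ≈ Bc(0)*(1 - (T/Tc)^2), so the field headroom shrinks quadratically as you approach Tc. Made-up numbers below, just to show the shape:

    # Empirical parabolic rule: Bc(T) ~ Bc(0) * (1 - (T/Tc)^2).
    # Bc(0) and Tc are made-up illustration values, not a real material.
    def critical_field(T, Tc=90.0, Bc0=100.0):
        """Approximate critical field (tesla) at temperature T (kelvin)."""
        return Bc0 * (1.0 - (T / Tc) ** 2) if T < Tc else 0.0

    for T in (4.2, 45.0, 77.0, 85.0):
        print(f"T = {T:5.1f} K  ->  Bc ~ {critical_field(T):5.1f} T")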
Having another data point will help in shaping their understanding of the phenomenon, which may lead to more practical developments in the future. So don't discount this result just because it's academic.
Power line losses are not insignificant, and put a limit on how far power can be shipped. Solar and wind variability is a problem, but if power could be shipped anywhere on the grid with no transmission line losses (there would still be other losses) it would do a lot to smooth out local variations in power generation.
I don't think the elimination of cable losses is near the top of the list when it comes to discussions around why high temperature SC is revolutionary.
Stronger magnets! Better magnetic bearings! RF circuits.
If superconductors were feasible on integrated circuits, I'd expect the TDP to get lower, enabling higher frequencies and integration (3D?). Frequencies would still be limited by propagation delay, but we have some room to grow. Classic FET wouldn't work, I think, at least not without bringing back switching losses, but there's probably a way to create a magnetic transistor using a few superconductor wires (locally increasing resistance above 0, for instance). Or use FETs with adiabatic computing, who knows? Future seems bright, and applications for room-temperature superconductors are aplenty. It's just that everyone thought of them as a pipe dream, so they aren't really being investigated that I know of.
Superconducting computer will still dissipate heat, as there is minimum cost per element switch due to thermal noise. This makes superconducting computers more efficient at lower temperatures. Room temp superconductor would be cool for making research simpler though.
... with this method. My point is, even if we have room temp room pressure superconductors, this isn't as revolutionary by itself. The price is key and ordinary copper is likely to win in most places except where price does not matter, or where we need the very strongest magnets.
I'm imagining if superconductors become widely used it will be in small quantities and for their unique properties that cannot be achieved otherwise. Not for minor benefits over alternatives?
The current has some limits, but with enough copper you can probably put some serious amperage on it, and a superconductive magnet in "persistent mode" can apparently keep going for months. Not sure how I'd feel about several MWh in a single circuit though.
The issue with flywheels tends to be containment when they explode; a MWh flashing through metal due to a containment failure I would hope to be safer.
Additionally, a superconductor wouldn't have gyroscopic forces to worry about.
1MWh is almost 1 ton of TNT. If you have that energy in a flywheel or in a electric circuit and something goes wrong, you need containment in both cases.
If the circuit loses superconductivity and a part overheats, it will release 1 MWh in a very short time, and that will cause an explosion.
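The arithmetic behind the TNT comparison, for reference:

    # Converting 1 MWh of stored energy into TNT equivalent.
    MWH_IN_J = 3.6e9        # 1 MWh = 3.6e9 joules
    TON_TNT_IN_J = 4.184e9  # 1 ton of TNT = 4.184e9 joules

    energy_j = 1.0 * MWH_IN_J
    print(f"1 MWh = {energy_j:.2e} J = {energy_j / TON_TNT_IN_J:.2f} tons of TNT")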
You should look up what happened at CERN when they had a superconductor meltdown. And that one wasn't even a burst of power; it was a continuous current (admittedly one carrying enough power for a small city).
Of course anything with that much energy is possibly violent, but at least a flywheel is easy to control. I haven't the faintest idea what that amount of electromagnetic energy would do, I'm not sure I want to find out.
At the very least I suspect you'll find that electromagnetic current also has angular momentum at those scales.