Really intriguing. My father was in the data storage industry (magnetic tape), so I've always watched innovations in this sector with great interest. Nine times out of ten, the storage-density breakthroughs you read about end up being commercially meaningless because they aren't thermally or chemically stable. It doesn't matter how many TB/cm^2 you're writing if the material degrades after a couple of weeks or months. This announcement is dramatically different on that front, which is what makes it so interesting.
Manufacturability is the other obstacle on which data storage inventions traditionally founder on the road to commercialisation. I don't have a sense of whether that will be an issue here. If it isn't a problem... then yes, this could be a genuine game-changer.
If commercially viable, this looks very disruptive to me: if everyone can get virtually unlimited, ultra-fast, lasts-a-million-years local storage in all their Internet-connected devices, the planet's online needs would change significantly in unpredictable ways. For example, third-party cloud-storage providers would face intense price pressure as more customers find it cheaper to store data locally and pay for more bandwidth.
--
PS. Does anyone here have sufficient knowledge of or experience with Photonics and nanostructured glass to comment on the near-term feasibility and commercial viability of this technology? Here's the paper: http://www.orc.soton.ac.uk/fileadmin/downloads/5D_Data_Stora...
Keep in mind that a positive research result is not necessarily anywhere close to any of usability, commercial viability, or even possibility outside the lab. Nuclear fusion is a prominent example of this, and I'm still waiting for MRAM, which has been "around the corner" for quite some time now. By the time memory glass becomes available, who knows what else will be.
Everspin needs to market it better... but yeah, if you're a Computer Engineer, you can build MRAM systems. The issue now is to build computers that actually take advantage of MRAM.
It's a niche product. Hardly anyone wants MRAM right now, so the few people who produce it can sell it for however much they want.
It's not necessarily better than DDR3 RAM as RAM, and it's significantly less dense than Flash. The practical applications of MRAM turned out not to be as popular as once thought...
So... it remains a niche product, with a niche price.
Again, MRAM is a niche product. So is a crypto chip. If you want a crypto chip, they are already available with Flash RAM / Microcontroller combos. There's no need to use MRAM for that application, when Flash is already widespread and cheap.
I disagree; the comparison to fusion technologies is irrelevant. It's not like they have to create crazy-powerful magnetic confinement fields here; it's lasers and glass. The CD went from an initiative at Philips in 1974 to release in 1983: nine years to fully develop it and do a production release.
I think it's inspiring, and I'm thankful I'm alive to see all of it unfold before us.
It's ultrafast femtosecond pulse lasers, spatial light modulators, and lab-quality fused silica glass. Nobody has ever put any of those into a consumer product before. The CD combined microscopic feature casting in plastic (same technology used for phonograph records since at least the 1930s if not the 1890s), metal-plating of plastic (from the 1950s), room-temperature semiconductor lasers (from the 1970s, although I don't know when their first mass-market product was), error-correction codes (commonly used since the 1940s), and PCM (from the 1930s, but, I think, only then being rolled out on a grand scale for digital telephony). The only one of the component technologies that might have had uncertainty as to its suitability for mass-market uses would have been the semiconductor laser diode, and in theory it wasn't necessary — you could have built CD players with HeNe lasers like the ones being rolled out in supermarket barcode scanners at the time, and which had been used for freight-car barcode scanners for a decade, but they would have been heavy and fragile like a tube radio or fluorescent light, not rugged and lightweight.
Aside from this, the storage technology itself might turn out not to work. It's holographic, and extrapolating is perilous in holography — some small source of noise that isn't significant when you have a megabyte of data recorded might turn out to be overwhelming when you have ten gigabytes of data recorded, let alone hundreds of terabytes.
Also, it occurs to me that megapixel spatial light modulators are the key element in megapixel projector displays, so that component might already be ready for prime time.
TI just released an FRAM (instead of Flash) version of the MSP430.
It has several features which make it better suited to ultra-low power operation.
I know FRAM was popular 10 years or so ago, but never really made it big. I'd always assumed someone had a patent locked up that made other companies avoid it. Maybe that's changed recently?
MRAM and FRAM are commercially available solutions, but the status quo of the computer industry is to build SSDs out of much, much slower (but far denser) Flash memory.
> Keep in mind that a positive research result is not necessarily anywhere close to any of usability, commercial viability, or even possibility outside the lab. Nuclear fusion is a prominent example of this, and I'm still waiting for MRAM
I, for one, am still waiting for those jellyfish-based CPUs :(
They don't actually claim it's fast or give any measurements of speed (apart from "ultrafast laser"). It seems reasonable to expect speeds on the order of other laser-based optical storage systems (CD, DVD, Blu-ray).
EDIT see also https://news.ycombinator.com/item?id=6034075
Instead, longevity and data density are emphasised, along with applications in long-term storage. Sounds like a replacement for tape.
If the read/write/access speed turns out to be good enough, then perhaps it could replace HDDs, with SSD as another cache level. The SSD cache would need to be page-based (just as the OS caches file pages in RAM); block-based caching would be horribly inefficient in this case.
This also got me thinking about a filesystem for the 'Superman memory'. It could be an extremely simple logging filesystem without garbage collection. Since the storage is supposed to be enormous, this could work for more than just incremental backups.
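To make that concrete, here's a minimal sketch of what an append-only, no-garbage-collection store could look like (Python, purely illustrative; the record format and names are invented, not from the article):

    import hashlib, json, time

    class AppendOnlyLog:
        """Toy log-structured store: records are only ever appended."""
        def __init__(self):
            self.log = []    # stand-in for the write-once medium

        def write(self, path, data: bytes):
            self.log.append(json.dumps({          # append, never overwrite
                "path": path,
                "time": time.time(),
                "sha256": hashlib.sha256(data).hexdigest(),
                "data": data.hex(),
            }))

        def read(self, path, version=-1):
            # Old versions are never erased, so every version stays readable.
            versions = [r for r in map(json.loads, self.log)
                        if r["path"] == path]
            return bytes.fromhex(versions[version]["data"])

    fs = AppendOnlyLog()
    fs.write("/photos/cat.jpg", b"v1")
    fs.write("/photos/cat.jpg", b"v2")
    assert fs.read("/photos/cat.jpg") == b"v2"             # latest
    assert fs.read("/photos/cat.jpg", version=0) == b"v1"  # history for free

With enormous write-once storage, the "delete" and "compact" steps of a normal log-structured filesystem simply never happen; the index of latest versions is the only mutable state.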
"third-party cloud-storage providers would face intense price pressure"
…or it could go the other way and they would be able to offer "eternal cumulative storage" for a low fee. This tech will be big and expensive before it's small and ubiquitous. There may be a first mover advantage to being able to afford the early models.
In the article they don't say if it's a re-writable device.
Supposing it's not, it would be useful only for big archives: you'd want to use the full 360 TB available, because the crystal could be expensive, and an average user doesn't have 360 TB to write every day.
If the crystal were cheap, or if you could build smaller crystals that store fewer TB, it would be different.
As lifeformed said, nothing is said about the reading speed.
Or you create a novel filesystem that allows you to use an exotically large capacity, write-once volume as a less exotically large, write-many volume with built-in, block-level history.
Seems pretty easy. Just store your filesystem in some immutable tree structure (à la Haskell or Clojure, where "changes" don't actually change the tree but instead create a new tree referencing parts of the old tree).
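A toy sketch of that path-copying idea (Python rather than Haskell/Clojure, and a hypothetical illustration rather than a real filesystem): each "update" returns a new root that shares every untouched subtree with the old root, so every old root is a free snapshot.

    class Node:
        __slots__ = ("key", "value", "left", "right")
        def __init__(self, key, value, left=None, right=None):
            self.key, self.value, self.left, self.right = key, value, left, right

    def insert(node, key, value):
        # Returns a NEW tree; the original is untouched (write-once friendly).
        if node is None:
            return Node(key, value)
        if key < node.key:
            return Node(node.key, node.value, insert(node.left, key, value), node.right)
        if key > node.key:
            return Node(node.key, node.value, node.left, insert(node.right, key, value))
        return Node(key, value, node.left, node.right)   # replace value

    def lookup(node, key):
        while node is not None:
            if key == node.key:
                return node.value
            node = node.left if key < node.key else node.right
        return None

    v1 = insert(insert(None, "/a", b"old"), "/b", b"data")
    v2 = insert(v1, "/a", b"new")          # v1 is still a valid snapshot
    assert lookup(v1, "/a") == b"old"
    assert lookup(v2, "/a") == b"new"
    assert v2.right is v1.right            # untouched subtree is shared, not copied

Only the path from the root to the changed node is rewritten, so each update costs O(log n) new blocks on the write-once volume, and every previous root gives you block-level history.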
I worked in optics in grad school. I think that near-term commercial viability is not good. The precision optics and the ultrafast laser would have to be heavily adapted to work outside the lab, and would be needed for both read and write.
It's not disruptive unless it's not only commercially viable but actually cheaper, and I know that without even having read Christensen's book. Also I think they said they were writing data at some small multiple of 200 kilobits per second, which hardly qualifies as "ultra-fast".
Available data will expand to exceed capacity. The amount of data created (or desired to be stored) will always exceed capacity to store (and process) it.
It would be disruptive, the same as every other couple-of-orders-of-magnitude jump in storage, past and future.
They are getting 12 kbit/s right now, and claim they could get Mbit/s to Gbit/s using very fast light modulators (which themselves are research-lab projects, to my knowledge).[1]
The phrase "ultra-fast" laser is a term of art, which really means "laser with ultra-short pulses". They are using a laser with pretty short, 280 fs, pulses, but only a 200 kHz repetition rate.[2] It evidently takes several pulses to write one bit. Due to the physics of femtosecond lasers, you can't easily increase the rep rate without compromising the pulse length and intensity, which both need to be very good in this application.
The readout "should" be faster than "conventional" methods, but no details are given.[1] I don't see evidence that they have shown even moderate speed readout. It looks like they took micrographs and then inspected the images to determine the values of the bits.
A "balanced" computer system used to follow the "one second" heuristic. (Is this named after somebody?) That is you should be able to offload temporary memory (core) to permanent storage in one second. And it should be able to execute a computation on every element of memory in one second too. I know this has held up through the gigaword generation. Leading edge computers, like a supercomputer, may be unbalanced where one of these factors lags.
Let me correct myself. I came across Jim Grey's 1999 paper "Rules of Thumb in Data Engineering" where he states that this heuristic is from Gene Amdahl in about 1965: a balanced system has one bit of I/O per second for each instruction per second and one byte of memory for each instruction per second. So, 8 MIPS per MBpsIO, and one MB/MIPS.
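Plugging numbers into those ratios makes the point concrete (a back-of-the-envelope sketch; the 100,000 MIPS figure is an assumption for illustration, not from Gray's paper):

    # Amdahl's ratios as stated above: 1 bit/s of I/O and 1 byte of
    # memory per instruction/s, i.e. 8 MIPS per MB/s and 1 MB per MIPS.
    def balanced_system(mips):
        io_mb_per_s = mips / 8   # 1 bit of I/O per instruction per second
        memory_mb = mips         # 1 byte of memory per instruction per second
        return io_mb_per_s, memory_mb

    # 100,000 MIPS is a rough, assumed figure for a modern desktop CPU.
    io, mem = balanced_system(100_000)
    print(f"balanced: {io:,.0f} MB/s of I/O, {mem:,.0f} MB of memory")
    # -> balanced: 12,500 MB/s of I/O, 100,000 MB of memory
    #    (far more I/O than a spinning disk delivers, hence "unbalanced")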
"The experiments were performed with a femtosecond laser system Pharos (Light Conversion Ltd.) operating at 1030 nm and delivering 8 μJ pulses of 280 fs at 200 kHz repetition rate."
My €0.02 bet is that this means 200 kilobits per second. In layman's terms: about twice as fast as we could send data to Voyager 1 when it was near Jupiter, a.k.a. "abysmal".
"delivering 8 μJ pulses of 280 fs at 200 kHz repetition rate."
8.0E-6 J * 2.0E5/s = 1.6 J/s = 1.6W
So, this laser outputs 1.6 W of power (in very short pulses, but that is irrelevant here). Let's assume an LED-pumped laser. http://en.wikipedia.org/wiki/Energy_conversion_efficiency gives, optimistically, 35% efficiency for an LED. That would mean a single-laser version would use about 5 W, and a 100-laser one about 500 W of input power.
=> Some improvements are needed before this replaces hard disks at home. We'll see whether there is room for that.
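For anyone who wants to check, here is the arithmetic spelled out (average power = pulse energy times repetition rate, divided by the assumed wall-plug efficiency):

    pulse_energy = 8e-6    # J  (8 uJ per pulse, from the paper)
    rep_rate = 200e3       # Hz (200 kHz repetition rate)
    optical_power = pulse_energy * rep_rate     # = 1.6 W average

    efficiency = 0.35      # optimistic LED efficiency, as assumed above
    input_power = optical_power / efficiency    # ~4.6 W per laser

    print(f"{optical_power:.1f} W optical; ~{input_power:.1f} W input per laser")
    # -> 1.6 W optical; ~4.6 W input per laser (round to 5 W; ~500 W for 100)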
Not really - it would just continue the trend of offline local storage and maybe make the Internet more distributed. Pretty much only cloud hosting providers would be affected.
ISPs would probably be pressured into finally going big or going home with their bandwidth, too, which is a good thing.
The new Utah NSA facility is planning for zettabyte (million-petabyte) capacities. We don't use that much yet, but we may during the lifetime of that facility. Humankind has an insatiable appetite for digital storage.
Storage is already pretty cheap, but it's a pain in the ass to manage, because you need to handle backups and make sure they are not all in one spot, so a fire won't destroy 50 years of your photos. That's why cloud services and online backup have been so successful recently. People want somebody else to secure their data, and they are willing to pay for it.
Our future is small computers + high speed internet available everywhere + online storage and services. Not the other way around.
"Our future is small computers + high speed internet available everywhere + online storage and services. Not the other way around."
Assuming our current technology, maybe. If I have limitless storage, and you do, too, I can be your backup, and you mine. Trickling in the background, always backing up. Or we could all have a mesh network backup doing similar to Dropbox crossed with Napster, without the centralization of either. Depending on bandwidth and storage growth, of course. Maybe the future is small computers, high-speed Internet available everywhere, and mass mesh storage/backup. Hopefully with encryption, however it goes, please God.
Mesh networks are still online storage from a user's perspective, in that they're limited by the user's network connection. Also, there is already some software capable of this; the problem is giving up a fair amount of bandwidth and storage capacity for somewhat-safe but free backups vs. paying ~$50 a year for safer backups without the headache.
PS: Popular torrent files are basically this already as they can persist a long time after the initial seeder stops seeding.
In this case, you're only limited by your network speed in the event of a data-destroying event. In the normal case it's completely local, which is much less of a headache than any network option most of the time.
As much as I am discomforted by continued erosion of privacy and as much as I intend to try and keep mine intact as possible, I think this backlash will come to essentially nothing. Convenience is too great so people would just gradually learn to live in a world where mostly anything you do is public unless you go to great lengths to hide it. For the next generation, it just would be the new norm and I don't think we can do much about it. We can try to minimize harm done to civil liberties, etc. but I don't think we can prevent the change from happening.
"Our future is small computers + high speed internet available everywhere + online storage and services. Not the other way around."
That's nice. If you perform a cursory examination of computer science history, you will find many examples of "what's new is old." Any new paradigm, usage scenario, software type, or what have you could invert that. Even the one you like so much ... I think they had these things called dumb terminals and mainframes at one point ...
Storage isn't so cheap when you consider that it doesn't last very long, so we're constantly having to move stuff to newer (thankfully usually bigger) storage devices.
Though I too suspect the NSA must be using some form of dense, long-lifetime data-in-solid storage technology that we haven't been told about. I can believe them storing 'all comms' in real time using hard disks, but I find it hard to believe they'd be anticipating refreshing the entire archive onto new media every 10 years or so, which they'd have to do with hard disks or DVD equivalents. And I flatly refuse to believe they'd be prepared to just let it go.
There was an announcement back in 1999 from Keele University in UK, of having succeeded with using NMR to store and retrieve data in solids. The lead researcher was Prof. Ted Williams, who led development of the nuclear magnetic resonance scanner.
Then suddenly... nothing more was heard of that.
There's just something about this idea that sounds extremely likely. That not only is the NSA spying on everyone, but also maybe keeping to themselves a revolutionary storage technology that would change everything. So far I haven't heard anything at all about _how_ the NSA is storing all that data, have you?
We should be building eternal, public archives of all the cultural and scientific knowledge of humankind. Instead we're building vast secret archives of everyone's tweets, lunch appointments and credit card transactions.
It's well known that several hundred patents a year get swept up under a "secrecy order" and disappear. But most of these are from a) defense contractors working on military systems and b) nuclear research.
Occasionally (maybe 20 per year), an inventor unaffiliated with the government will attempt to get a patent, and have it get swept up under a secrecy order.
But a new type of data storage? I would surely think the government would let this get developed and refined in the private arena.
Cutting-edge development is a sinkhole for massive amounts of money. Just think how many hard drives a billion dollars could buy. I don't think it makes sense economically when there is a commonly available alternative.
What worries me more is the possibility that the oil industry has spies with control over this "secrecy order". The fabled "200 mile per gallon carburetor" type stuff.
It is quite easy for me to believe that a trillion dollar industry would attempt to protect itself by any means possible.
>What worries me more is the possibility that the oil industry has spies with control over this "secrecy order". The fabled "200 mile per gallon carburetor" type stuff.
>It is quite easy for me to believe that a trillion dollar industry would attempt to protect itself by any means possible.
All the engineering calculations for a car you can make yourself, starting from first principles.
They probably filter out most of the data. They're building a huge new datacenter; why would they do that if they have the capacity to store all that data already?
I initially interpreted "5D" in the title as five spatial dimensions, which would be incredible, but the reality is entirely credible:
> The information encoding is realised in five dimensions: the size and orientation in addition to the three dimensional position of these nanostructures.
I wonder how orientation is just a single degree of freedom though; somehow one of the polarization directions of the light must get lost along the way.
Perhaps it is because this is a crystal and there is a relationship or redundancy between the different axes of orientation.
EDIT: Actually, this [1] paper says in the abstract that they use wavelength, polarization and the three spatial dimensions. So it is wavelength and not direction!
It's just a conceptual link. Think about the higher dimensional symmetries in quasicrystals and how one might encode information using a mechanism that exploits that property.
Some other works resulting from their research announced on http://www.femtoprint.eu/ also sound quite interesting, e.g. "a first transparent actuator fabricated using the femtoprint process", sized in millimeters.
This stuff is really fascinating to me. Will we ever hit a point in data storage where you can fit all the data about a volume of space into a smaller volume of data storage? I'm sure the Uncertainty Principle plays into it, but theoretically could it be done--store 1 cubic meter's worth of data into 1 cubic micrometer of storage?
No, since you could fill the cubic meter itself with storage media. Then you would need to record both the data on that storage media and the structure of the media itself.
> If file sizes could be specified accurate to the bit, for any file size N, there would be precisely 2^(N+1)-1 possible files of N bits or smaller. In order for a file of size X to be mapped to some smaller size Y, some file of size Y or smaller must be mapped to a file of size X or larger. The only way lossless compression can work is if some possible files can be identified as being more probable than others; in that scenario, the likely files will be shrunk and the unlikely ones will grow.
As a simple example, suppose that one wishes to store losslessly a file in which the bits are random and independent, but instead of 50% of the bits being set, only 33% are. One could compress such a file by taking each pair of bits and writing "0" if both bits were clear, "10" if the first bit was set and the second one not, "110" if the second was set and the first not, or "111" if both bits were set. The effect would be that each pair of bits would become one bit 44% of the time, two bits 22% of the time, and three bits 33% of the time. While some strings of data would grow, others would shrink; the ones that shrank would--if the probability distribution was as expected--outnumber those that grew (4/9 of the pairs would shrink by a bit, 2/9 would stay the same, and 3/9 would grow).
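That scheme is simple enough to try directly; here's a quick illustrative sketch that implements exactly the pair code described above and measures it on random data with 1/3 of the bits set:

    import random

    # The pair code from above: 00 -> "0", 10 -> "10", 01 -> "110", 11 -> "111"
    CODE = {"00": "0", "10": "10", "01": "110", "11": "111"}
    DECODE = {v: k for k, v in CODE.items()}

    def encode(bits):
        return "".join(CODE[bits[i:i+2]] for i in range(0, len(bits), 2))

    def decode(coded):
        out, i = [], 0
        while i < len(coded):
            for n in (1, 2, 3):               # the code is prefix-free
                if coded[i:i+n] in DECODE:
                    out.append(DECODE[coded[i:i+n]])
                    i += n
                    break
        return "".join(out)

    random.seed(0)
    bits = "".join("1" if random.random() < 1/3 else "0" for _ in range(100_000))
    coded = encode(bits)
    assert decode(coded) == bits              # lossless round trip
    print(f"{len(coded) / len(bits):.3f} output bits per input bit")
    # -> about 0.944 (the expected 17/18), so the file shrinks on average

The expected cost per input pair is 4/9*1 + 2/9*2 + 3/9*3 = 17/9 bits, i.e. 17/18 of a bit per input bit, which is the average the sketch measures.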
You can't really, by definition. By the same logic you could store a 1-cubic-meter data storage cube inside a 1-cubic-micrometer data storage cube, which in turn could be stored in a smaller cube, ad infinitum.
Going by the same logic you could store all the entropy of the universe in an infinitely small cube, which is not possible unless the universe had zero entropy.
Sort of, according to the holographic principle. Since there is a limit on information density in space, you should be able to reduce a 3-dimensional volume to a 2-dimensional data structure. https://en.wikipedia.org/wiki/Holographic_principle
Tiny nitpick. The universe technically didn't exist back then (at least not in the form of spacetime), I believe. Existence itself was confined to a singularity, which then expanded with tar -xzf singularity.tar.gz
It doesn't even matter in the first place if it could store 'all' the information about a space, because it's physically impossible to measure that data.
But when you talk about recreating a volume, in a manner that is at least vaguely possible: you could store a record of every atom at angstrom-level resolution in much less space than the original.
I am not a physicist, but isn't that already possible, depending on the contents of the volume? I can store all the data about a 1km^3 vacuum containing a single atom at the centre in a volume much smaller than 1km^3.
This is really exciting. I hope it does come to market. There's a need for stable, huge, slow storage which isn't really covered by drives at the moment.
A company called InPhase has had various forms of holographic storage "ready any time now!" for years. They eventually released a product. It's specialist, and expensive. That company eventually folded. (I remember when they were using concept art using a credit-card sized thing for media. That was ten years ago, and 1 TB was exciting.)
This is a fascinating result. The equipment needed to reproduce it is a bit extreme (femtosecond multi-watt lasers are not 'garden variety' by any means), and one wonders if the 'read' set-up could be made more cost-effective than the 'write' set-up. I would hope that Google would invest in a system to copy all of the books they've scanned into such crystals.
That said, I could imagine offering something like a web 'snapshot' (think Internet Archive meets the DVD-R :-) which one might subscribe to. That would be pretty cool.
Funny how many movies use crystals for data storage; I always found it a bit silly... But in reality, it looks like they may be the future of storage. I remember experiments with writing data into cubes with lasers about a decade ago...
They will have to change the name before they market it. I can't imagine many people understanding anything with 5D in the name, regardless of what it means.
Since when do people have to understand a name for it to take off? Look at the soup of technologies grouped under the label "4G." All consumers know is that 4G sounds better than 3G.
I'm sure if the technology is commercialized, someone will come up with a cute nonsensical name for it, but I can absolutely see a subhead somewhere saying "Advanced Five-dimensional storage technology!", just because it sounds cool.
Excellent! Seeing as Superman was released in 1978, I expect scientists to have figured out Arc Reactor and Repulsor technology sometime in late 2048.
Why does this same magic crystal storage article keep popping up every 5 years? They always claim they could be very close to commercialisation and that it could change the world forever. Yet 25 years later, I fail to see any progress.
EDIT: I don't pretend to understand the technical details; does anyone know if this is actually anything different from that old holographic stuff?
Really, how can they get away with the word "unlimited" in the title there? The limit is mentioned in the first paragraph: 360TB/disc (whatever a disc is... though I assume it is something which does not exist in unlimited quantities).
Still, nice work! Data storage capacity is a great metric of our digital quality of life.
"[...]At the moment companies have to back up their archives every five to ten years because hard-drive memory has a relatively short lifespan,” says Jingyu.
Since when do hard drives die after 5-10 years? Are there some new laws of physics that I'm not aware of, or what?
Google's 2007 study found, based on a large field sample of drives, that actual annualized failure rates (AFRs) for individual drives ranged from 1.7% for first-year drives to over 8.6% for three-year-old drives.
Haven't you experienced it? My HDDs all wear out eventually... there are only so many times a thing can spin at 7200 RPM, with heads just nanometers off the ground, before something gives.
I'm guessing this is with MTBF rates given by manufacturers. With continuous use, it's not that uncommon for hard drives to reach EOL fairly quickly.
In a similar vein, I've heard it proposed that you could store information in a diamond using different isotopes of carbon as binary 1s and 0s. That would last a tremendously long time, and I believe it would have a huge storage capacity.
I can't quite tell from any of the articles about this if it's write-once-read-many or read-write. Given that it's about "fused quartz" I am suspecting write-once.
Nice to see Southampton in the news. This is part of the department I study Computer Science in, and it's this research focus that drew me to Southampton. Awesome place.
How awesome would it be if this was then used by NSA to provide agents with backups of all snooped data, available in their pockets, in a piece of glass!
For all you know they've built this (or something else equally advanced) already. Which would make claims that they're storing everything pretty believable.