In short: a scanning tunneling microscope moves individual hydrogen atoms, forming a readable binary code.
New and cool: fully automated, and works well above room temperature (unlike more traditional cryogenic designs).
Expectedly impractical: very, very slow write speed and probably a pretty slow read speed, because everything is mechanical. Only a couple dozen bytes were actually written in the experiment.
Duplication halves the write speed (and only tells you the data is bad, at half the read speed). ECC would reduce R/W speed somewhat further, but at least it would allow one to correct the data and rewrite it.
Duplication is a form of ECC (admittedly a bad one). If I write everything 3 times, I can correct from any single error, but I will erroneously correct on two errors. If you're limited to duplication, you'd want four copies for 1 error correction and 2 error detection.
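A minimal sketch in Python of that tradeoff (majority-vote decoding of a repetition code; the function and examples are mine, purely illustrative):

```python
from collections import Counter

def decode_majority(copies):
    """Decode a repetition-coded bit by majority vote.
    Returns (bit, status): status is 'ok', 'corrected', or 'detected'."""
    counts = Counter(copies)
    (top, top_n), *rest = counts.most_common()
    if top_n == len(copies):
        return top, "ok"            # all copies agree
    if rest and rest[0][1] == top_n:
        return None, "detected"     # tie (e.g. 2-2 with four copies): flag, don't guess
    return top, "corrected"         # clear majority: treat the minority copies as errors

# Three copies: one flipped copy is corrected...
print(decode_majority([1, 1, 0]))     # (1, 'corrected')
# ...but two flipped copies are silently *mis*-corrected to the wrong bit.
print(decode_majority([0, 0, 1]))     # original bit was 1, decoder returns 0
# Four copies: one flip is corrected, and a 2-2 split is at least detected.
print(decode_majority([1, 1, 1, 0]))  # (1, 'corrected')
print(decode_majority([1, 1, 0, 0]))  # (None, 'detected')
```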
It's not hard to imagine using the hydrogen position as the state, rather than its presence: if it's in the left cell, on; if it's in the right cell, off.
If you've got that, then it's not hard to imagine, rather than one single probe pointing down (v), a whole array (vvvvvvvv). If the array of probes can be fabricated on a wafer, you can have massive parallelism.
The memory would have cycles: a read-0 phase, in which any bit that needed to be flipped to one would be picked up; the array shifts to the left; any probe holding a hydrogen would then write. Then a read-1 phase, which prepares for the zero writes.
Obviously this depends critically on arrays of probes, which might not be possible. If it is, there's no reason to think this can't be massively parallel.
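A toy simulation of that two-phase cycle, just to make the idea concrete (the cell layout, phase names and probe behaviour are my own assumptions, and the mechanical shift of the array is abstracted away):

```python
# Each site stores one bit as the position of a single hydrogen atom
# (left cell = 1, right cell = 0); one probe sits above each site.

def write_cycle(stored, target):
    """Bring `stored` bits to `target` in two phases: 0->1 flips, then 1->0."""
    carrying = [False] * len(stored)

    # "Read 0" phase: probes over sites that must flip 0 -> 1 pick up the
    # hydrogen, then deposit it in the other cell, making the bit a 1.
    for i in range(len(stored)):
        if stored[i] == 0 and target[i] == 1:
            carrying[i] = True
    for i in range(len(stored)):
        if carrying[i]:
            stored[i] = 1
            carrying[i] = False

    # "Read 1" phase: the symmetric pass for sites that must flip 1 -> 0.
    for i in range(len(stored)):
        if stored[i] == 1 and target[i] == 0:
            carrying[i] = True
    for i in range(len(stored)):
        if carrying[i]:
            stored[i] = 0
            carrying[i] = False

    return stored

print(write_cycle([0, 1, 1, 0, 0], [1, 1, 0, 0, 1]))   # -> [1, 1, 0, 0, 1]
```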
There is such a limit to areal density. The math involved dives into entropy and the Planck length, but the short answer is that any information at the quantum level must be quantized in some form, and the smallest possible quantization is the Planck length squared, called the "Planck area".
The holographic principle says that all of the information required to represent a 3-D volume is encoded on the 2-D surface of its boundary i.e. in one sense there is no difference between e.g. a black hole and its surface.
The idea is that our universe has three spatial dimensions, but only two are needed to represent a given volume of space. All the information is on the boundary surface.
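For concreteness, the standard area law this refers to (the Bekenstein–Hawking entropy, which a black hole saturates) can be written as:

```latex
% Entropy/information scales with the boundary area measured in Planck areas.
S_{\mathrm{BH}} = \frac{k_B\, A}{4\,\ell_P^{2}},
\qquad
\ell_P^{2} = \frac{\hbar G}{c^{3}} \approx 2.6\times 10^{-70}\ \mathrm{m}^2 .
```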
> Hard drives are roughly 1 Tb/in^2 in areal density right now. SSDs are a little better.
I am pretty sure that 1 Tb means 1 terabit/in^2. Here it is 138 terabytes/in^2, or 1104 terabit/in^2, hence the ~1000x difference. Not 100x.
Western Digital's MAMR allows up to 4 terabit/in^2, so the gap shrinks to roughly 275x. Then there is bit-patterned media, which should get us to 10 terabit/in^2.
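Spelling out the arithmetic with the figures quoted above:

```latex
138\ \mathrm{TB/in^2}\times 8 = 1104\ \mathrm{Tb/in^2},\qquad
\frac{1104}{1}\approx 1000\times\ \text{vs. today's HDDs},\qquad
\frac{1104}{4}\approx 275\times\ \text{vs. MAMR}.
```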
But it doesn't look like HDD manufacturers are in any hurry; they will try to milk the market for as long as possible. The total HDD market will fall to 50% of its peak volume sometime next year or in 2020.
Falling PC shipments. Notebooks transitioning to SSDs. The move to cloud computing, where cloud providers choose higher-capacity units rather than the model with a slightly cheaper price per GB. Obviously the TCO favours the former on power and space usage.
True. But even if you had double the capacity at half the unit shipments, it would still have used half the motors, half the casings, assembly, heads, etc.
Low-end computers will eventually go SSD too. The cost of a hard drive has a floor because of all the mechanics that go into it. Moore's law may be petering out, but I think it will still deliver usably large SSDs below that floor.
The amusing thing about the Bekenstein bound is that it's per millimetre squared, not per millimetre cubed. The amount of information you can pack into a given region of space scales like length^2, not length^3.
For the sorts of scales human beings are interested in, of course, the Bekenstein bound is a long long way from being relevant, and what matters for us in principle is volume rather than area. (There are practical difficulties in using "substantially" 3-dimensional regions for storage, but they have nothing to do with the danger of the storage device turning into a black hole.)
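For reference, the bound itself, in terms of the enclosing radius R and energy E (and the black-hole special case that gives the area scaling mentioned above):

```latex
S \le \frac{2\pi k_B\, R\, E}{\hbar c},
\qquad
I_{\max} = \frac{S}{k_B \ln 2} \le \frac{2\pi R E}{\hbar c \ln 2}\ \text{bits}.
% If the region has collapsed to a black hole, E = R c^4 / (2G) and the
% bound reduces to the area law S = k_B A / (4 \ell_P^2).
```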
Yeah, but both are very interesting, as are the specific constraints that can be applied, like what's possible in 1D, 2D, or 3D and how quantum-mechanical phenomena enter into the limitations.
It helps the discussion to bracket the possibilities. If there's an absolute hard limit somewhere, let people know so they don't assume it's possible to go further.
I find this useful in discussing solar power: there is an absolute maximum of about 1300 watts per square meter; most people don't know this and implicitly assume substantially more is achievable.
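As a back-of-the-envelope illustration of why that ceiling matters (the 10 m² of roof and 25% efficiency are made-up round numbers):

```latex
P_{\max} \approx 1300\ \mathrm{W/m^2}\times 10\ \mathrm{m^2}\times 0.25 \approx 3.3\ \mathrm{kW},
```

and no amount of cleverness in the panels gets you past that for that area.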
No... You could theoretically store information in the spin states of a few valence atoms, but all of the tightly bound atoms are already filling all of the spin states.
Those images almost make it look like a small abacus. Could more information be written if it were written as an abacus rather than in binary? Stupid question, I know, since I don't know how an abacus works, but I assumed more information could be represented in the same area.
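Not a stupid question. A storage site with k distinguishable bead positions holds log2(k) bits, so an abacus-style cell only wins if the extra positions don't cost proportionally more area:

```latex
\text{bits per site} = \log_2 k:
\qquad k=2 \Rightarrow 1\ \text{bit},\quad
k=4 \Rightarrow 2\ \text{bits},\quad
k=10 \Rightarrow \approx 3.3\ \text{bits}.
```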
I do wonder what will happen when storage capacity greatly exceeds network capacity. That is, when it will only be possible, and cheaper, to send large amounts of data by physically moving it rather than using a digital network.
As sibling commenters have alluded, this is actually already (and has always been) the case, at least for "cheaper", rather than "only possible".
However, looked at another way, it has also never been and never will be true, if one defines "digital network" broadly enough to include any connection between the storage device and, say, a CPU.
Sneakernet or "a station wagon full of tapes" merely increases the bandwidth of one segment of that end-to-end connection. What's usually forgotten is that it doesn't do anything about the bandwidth of, for example, the tape drive (assuming you even have one free for long enough).
In the more modern world, it can be easy to forget just how huge hard disks are compared to how fast they can transfer. A 12 TB drive that can do 120 MB/s would need 100k seconds (almost 28 hours) to transfer its entire contents.
The situation is particularly severe with "spinning rust", but SSDs seem to be headed in that direction too, as densities increase faster than interface speeds (even NVMe).
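The arithmetic behind the 12 TB figure, as a throwaway helper (the 250 MB/s line is just an illustrative faster drive, not a real spec):

```python
def transfer_time_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to read or write a drive end-to-end at sustained throughput."""
    seconds = capacity_tb * 1e12 / (throughput_mb_s * 1e6)
    return seconds / 3600

print(f"{transfer_time_hours(12, 120):.1f} h")  # ~27.8 h for 12 TB at 120 MB/s
print(f"{transfer_time_hours(12, 250):.1f} h")  # still ~13.3 h at 250 MB/s
```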
I can't find it, but recently (within the last week) there was an article (HN, Ars, reddit??) about a new record of about 768 Tbps (not Gbps), with real-world throughput of about 650 Tbps. It used different colours/wavelengths of light, 23 or 43, and did some frequency alteration, plus kept most of the overhead at the source rather than in repeaters.
So yeah, roughly 650 Tbps seems usable in such a situation.
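For scale, taking the ~650 Tbps figure at face value against the 12 TB drive from above:

```latex
650\ \mathrm{Tb/s}\approx 81\ \mathrm{TB/s}
\quad\Rightarrow\quad
\frac{12\ \mathrm{TB}}{81\ \mathrm{TB/s}}\approx 0.15\ \mathrm{s},
```

i.e. the drive's own interface, not a link like that, would be the bottleneck.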
There was a time when most people had dial-up Internet access but owned or had access to recordable CD drives and USB flash drives. Many of us had high-capacity (for the day) removable media like the Zip-100, Zip-250, SyQuest drives, or Iomega Super120. Even with 6 or 10 Mbps DSL it was sometimes faster to take a box full of disks, or pull a couple of drives from a system, and drive them to a friend's or colleague's place to exchange a large amount of data. Sending CDs or Zip disks via UPS or FedEx was a thing for a long time in the design and custom printing industries. Before all of that, there were floppy disks.
Actually, that's highly doubtful. A backup solution will have to be able to actually restore a meaningful fraction of the stored data to be useful. Worse yet, if seeking is expensive, even smallish but scattered amounts of data can be problematic.
Tapes suffer from that big time. They've grown in storage space comparably to hard drives, but since it's normal to have libraries with many tapes per drive, there are severe practical limits even for backups.
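A rough model of why scattered restores hurt so much on tape (the throughput and seek numbers are ballpark assumptions, not quoted specs):

```python
def restore_hours(total_gb: float, n_files: int,
                  throughput_mb_s: float = 300.0,
                  avg_seek_s: float = 60.0) -> float:
    """Rough restore time: streaming the data plus one average seek per file."""
    stream_s = total_gb * 1e3 / throughput_mb_s
    seek_s = n_files * avg_seek_s
    return (stream_s + seek_s) / 3600

# 1 TB restored as one sequential image vs. 10,000 scattered small files:
print(f"{restore_hours(1000, 1):.1f} h")       # ~0.9 h, dominated by streaming
print(f"{restore_hours(1000, 10_000):.1f} h")  # ~167.6 h, dominated by seeks
```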
Why do we need such high-density slow storage anyway? Would it lead to more energy-efficient systems, even if just through space reduction? I mean, for regular data (video, music, images, text) we already have better resolutions than we can perceive (going much further would mainly be wasteful, and the bottleneck seems to be processing that data), and big services that host this kind of content, like YouTube and others, can't be expected, in the long term, not to forget most of their content. So besides a few research applications... genuinely, is there any really important application that we have big trouble handling right now? I mean, the world's storage requirements won't keep increasing forever, no?