I don't see what makes this qualitatively more amazing than storing a terabyte in something the size of a wallet... At this point, we're comparing apples and chestnuts.
If you don't frequently feel like you're living in the future, you're living in the past :)
I don't know if "no moving parts" makes it more or less impressive. Imagining electromagnetic heads floating on a layer of air that's 0.0125 um thick always impresses me.
Yes, but the fact that we can exploit that - and that we use the same physics for read heads as those we use for calculating flight - is pretty impressive, is it not?
I'm impressed we can build something mechanical to 0.5 microInches[1] in a mass-produced consumer-grade product. And these are little bits of bent metal and coiled wire; not photographically etched wafers of chemically deposited semiconducting material.
A bit is the smallest unit in computer science. It is either 1 or 0. The unit abbreviation for bit is 'b'.
A byte is the next largest unit in cs. It is equal to 8 bits, so it can have 256 different values. The unit abbreviation for byte is 'B'.
There are two scales for measuring large amounts of bits and bytes: one is the base-10 scale from SI that we know and love, the other is base 2. The prefixes we use are, in order from smallest to largest, 'kilo' 'mega' 'giga' 'tera' 'peta' ..., and the abbreviations are 'K' 'M' 'G' 'T' 'P' ...
On the SI scale, each step up is a factor of 1000, or 10^3. E.g., 1 Mega_ is 1000 Kilo_. On the base-2 scale, each step up is a factor of 2^10, or 1024. E.g., 1 Mega_ is 1024 Kilo_.
When it is not clear from context which scale is being used, you can specify base 2 by replacing the second syllable of the prefix with 'bi', so the scale becomes 'Kibi' 'Mebi' 'Gibi' 'Tebi' 'Pebi' ..., and the abbreviations become 'Ki' 'Mi' 'Gi' 'Ti' 'Pi' ... These abbreviations aren't widely used, however, and there is no corresponding way to specify that you mean the base-10 scale.
This isn't a huge issue, though, because aside from large scale storage (anything bigger than RAM) and sometimes networking, base 2 is assumed.
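If it helps, here's a tiny Python sketch of how far the two scales drift apart as the prefixes get bigger (nothing here is tied to any real device; it's just the arithmetic):

    # How far the SI and base-2 scales drift apart as the prefixes grow.
    SI = 1000       # kilo, mega, giga, tera, peta (base 10)
    BIN = 1024      # kibi, mebi, gibi, tebi, pebi (base 2)

    prefixes = [("kilo", "kibi"), ("mega", "mebi"), ("giga", "gibi"),
                ("tera", "tebi"), ("peta", "pebi")]

    for power, (si_name, bin_name) in enumerate(prefixes, start=1):
        si_bytes = SI ** power
        bin_bytes = BIN ** power
        print(f"1 {si_name}byte = {si_bytes:>16} B    "
              f"1 {bin_name}byte = {bin_bytes:>16} B    "
              f"(base 2 is {bin_bytes / si_bytes:.1%} of SI)")

The growing gap is why a "1 TB" drive shows up as roughly 931 GiB once the OS reports it in base 2.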
Great post with useful info, but there's a bit more (if you'll pardon the pun) one can add to it:
Eight bits in a byte is a "de facto" standard, so it is a "typically" safe assumption on most modern hardware, but the actual definition of how many bits are in a byte is hardware (and situation) dependent. There is no definitive or formal standard that defines how many bits are in a byte. The range is often between 7 and 12 bits per byte, but I think some really ancient systems (1940s-1950s) were below 7 bits per byte.
Good point, I should have used octet, and mentioned that a byte is assumed to be equal to an octet unless otherwise stated. I should probably have also called a bit a fundamental unit and mentioned its relationship to bans.
no, i just couldn't think of a relevant use for it, aside from making it easier to talk about hex. not to mention, it's even less of a standard for measuring things than -bi prefixes.
I agree that it's not super useful when comparing storage systems. Though in debugging, a nibble is one hex digit of a byte, which is useful. It's also the next largest cs unit. If you had said the next largest commonly used unit to measure storage, it would have been fairly clear. I just wanted to educate the younger readers (probably not you) about another way to look at bytes and bits. Cheers.
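P.S. A minimal Python illustration of the nibble/hex-digit view, in case it helps anyone following along (0xB7 is just an arbitrary example byte):

    # A nibble is 4 bits: half a byte, and exactly one hex digit.
    value = 0xB7                        # an arbitrary example byte (two hex digits)

    high_nibble = (value >> 4) & 0xF    # 0xB == 11
    low_nibble = value & 0xF            # 0x7 == 7

    print(f"{value:#04x} -> high nibble {high_nibble:#x}, low nibble {low_nibble:#x}")
    # prints: 0xb7 -> high nibble 0xb, low nibble 0x7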
Now we just need ultra-high-speed NFC and your entire computing environment can be initialized when you make your first gesture on a touch screen... or a really fancy replacement for a fingerprint scanner for unlocking/imaging your laptop...
I wonder how many erase/write cycles it can sustain. Interesting that the press release doesn't mention it. Previous generations had already descended as low as 3,000 cycles.
I realize wear-leveling makes it a non-issue in many applications, but I do wonder just how flimsy a memory we'll wind up settling for, too.
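For anyone curious, here's a rough back-of-envelope sketch of what a cycle limit means in practice, assuming wear-leveling really does spread writes evenly; the capacity, daily write volume, and write-amplification figures below are pure guesses for illustration:

    # Back-of-envelope flash lifetime, assuming wear-leveling spreads writes
    # evenly over the whole device. Every number below is an assumption.
    capacity_gb = 128            # hypothetical drive capacity
    pe_cycles = 3_000            # program/erase cycles per cell (figure from this thread)
    daily_writes_gb = 20         # assumed host writes per day
    write_amplification = 2.0    # assumed controller overhead

    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    lifetime_years = total_writable_gb / daily_writes_gb / 365
    print(f"~{lifetime_years:.0f} years before the cells are worn out")

With those made-up numbers the cells outlast the laptop, but halve the cycle count a couple more times and it stops being a comfortable margin.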
This is enormously impressive. But at what cost? The press release claims "low cost", but fails to specify a concrete number.
If this can be produced cheaply, I can imagine newer laptops, especially those with constrained form factors like the Air or other ultrabooks, using just this and skipping the entire SATA interface altogether. Perhaps even bundling the storage with the motherboard for better throughput.
The cost per mm^2 of wafer has more or less stayed constant for the last 30 years, so as density goes up it should get cheaper to produce per gigabit. Though there might be an initial spike in price due to high demand and low early yields.
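The arithmetic behind that, with made-up numbers (the wafer cost and densities below are illustrative, not from the press release):

    # If cost per mm^2 of wafer stays constant while density improves,
    # cost per gigabit falls in proportion. Numbers are made up for illustration.
    cost_per_mm2 = 0.10              # dollars per mm^2, assumed constant
    old_density = 0.5                # Gbit per mm^2, previous generation (assumed)
    new_density = 1.0                # Gbit per mm^2, new generation (assumed)

    print(f"old: ${cost_per_mm2 / old_density:.2f}/Gbit, "
          f"new: ${cost_per_mm2 / new_density:.2f}/Gbit")

Double the density at the same wafer cost and the cost per gigabit halves, assuming yields eventually catch up.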
It is a very cool accomplishment. And even though it has some of the 'hard disk' disease where denser flash is often slower than less dense flash, I'm less worried than I was before about how fragile flash is relative to hard disks.
This gave me one of those rare "I'm finally living in the future" feelings