
Does anyone know if there has been progress in volatile memory tech? I'm hoping for 1 TB RAM chips with 10x faster access times than current RAM. That would enable PC gaming to deliver unmatched immersive experiences, among other applications.


Especially if this could be accessed directly as VRAM, and just as fast, by the GPU. Infinite-detail, fractal-resolution, destructible and animatable voxel terrains come to mind (not the Minecraft kind of "voxel", mind).


The thing is, at this point a lot of latency comes from the fact that each DIMM is physically displaced from the CPU.

When you are on a timescale of nanoseconds, even the propagation speed of electrical signals can be slow compared to something like Intel's L4 cache, which sits on-package right next to the CPU die.

For any type of volatile memory that latency will exist until motherboard designers move the RAM closer to the CPU or adopt optical interfaces between parts.

For a quick calculation that puts things in perspective: take the speed of light as 299,792,458 m/s and a nanosecond as 10^-9 seconds, and you get ~0.3 m/ns for light. That means every third of a meter between the DIMMs and the CPU adds a constant 1 ns of latency each way, and that's the optimistic bound, since signals in copper traces travel slower than light.
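A quick sketch of that back-of-the-envelope calculation (the 10 cm trace length is just an illustrative assumption, not a measurement of any real board):

```python
# One-way propagation delay for a signal at the speed of light.
# This is an optimistic upper bound: signals in copper traces
# actually travel at roughly 0.5-0.7c, so real delays are larger.
C = 299_792_458  # speed of light, m/s

def propagation_delay_ns(distance_m: float) -> float:
    """One-way delay in nanoseconds for a light-speed signal."""
    return distance_m / C * 1e9

# A DIMM ~10 cm from the CPU costs ~0.33 ns one way, ~0.67 ns round trip.
print(f"{propagation_delay_ns(0.10):.3f} ns one way")
print(f"{2 * propagation_delay_ns(0.10):.3f} ns round trip")
```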


Gaming is limited by the CPU/GPU, and a bit by the bandwidth in between, not by RAM speed or latency.


Not strictly true. Most algorithm choices in gaming involve trade-offs between CPU and RAM. If you increase the available RAM, you can usually switch to algorithms that are more RAM-hungry and less CPU-hungry, for large speed benefits.
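A toy illustration of that space-for-time trade-off, not tied to any particular engine: spend RAM on a precomputed table instead of spending CPU recomputing the same values every frame.

```python
import math

# CPU-heavy: recompute sin() on every call.
def sin_computed(angle_deg: float) -> float:
    return math.sin(math.radians(angle_deg))

# RAM-heavy: precompute a table once, then do a cheap array lookup.
SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]

def sin_lookup(angle_deg: int) -> float:
    return SIN_TABLE[angle_deg % 360]

# Same answers, different resource profile: more RAM, less CPU per call.
assert abs(sin_computed(30) - sin_lookup(30)) < 1e-12
```

The same pattern scales up to things like precomputed visibility sets or baked lighting, where the table is gigabytes rather than a few kilobytes.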


If we're talking about game logic, I agree.

If we're talking about rendering beautiful complex worlds with post-processed special effects, I disagree.


In 1 TB of RAM you can keep, without any compression, a 3D voxel array covering 1000 m × 1000 m × 64 m, with each voxel being a 2 cm × 2 cm × 2 cm cube (which fits at one bit per voxel: 50,000 × 50,000 × 3,200 = 8×10^12 voxels = 1 TB). And you can look it up randomly with negligible latency and do real-time ray tracing on it.

If that wouldn't change games, I don't know what would.

Besides, GPUs will obviously also use this technology if it really works.
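Sanity-checking the arithmetic above (assuming one bit per voxel, e.g. a plain occupancy grid, which is the reading that makes the numbers fit exactly):

```python
# Work in integer centimetres to avoid floating-point truncation.
edge_cm = 2                       # 2 cm voxel edge
nx = 1000 * 100 // edge_cm        # 50_000 voxels along x
ny = 1000 * 100 // edge_cm        # 50_000 voxels along y
nz = 64 * 100 // edge_cm          # 3_200 voxels along z
voxels = nx * ny * nz             # 8e12 voxels total
bits = voxels                     # one bit per voxel (occupancy)
terabytes = bits / 8 / 1e12       # decimal terabytes
print(nx, ny, nz, voxels, terabytes)  # -> 50000 50000 3200 8000000000000 1.0
```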


This needs to be higher up.. that's awesome to know! ... we might actually be able to emulate the PS3 on the computer! Hah


I don't know what is special about emulating a PS3 on a PC; most mainstream PCs and GPUs are faster than the PowerPC-based Cell processor in the PS3 (an 8-year-old console). Even the PS4 does not contain better graphics processing capabilities than a recent, relatively high-end PC.


Even if you can look them up fast, that doesn't mean you can run the raycasting logic fast enough on that amount of detail.


I'm a graphics programmer.

The only reason to prefer the GPU is that it confers an advantage over the traditional CPU+RAM combination. There's nothing inherently special about a modern GPU: it is a set of operations and abilities encoded into hardware, e.g. the ability to perform various kinds of texture filtering automatically, transparently to the game developer.

Since the GPU is hardware, and hardware is less flexible than software, a graphics programmer would, all else being equal, prefer a software-based pipeline to a hardware-based one. Hardware pipelines are preferred strictly because their advantages outweigh their disadvantages: using a GPU typically enables graphics programmers to create renderers that are 10-100x more efficient than software-based renderers, so the added flexibility of a software rasterizer tends to be forgotten in the face of the massive efficiency the GPU enables.

The GPU primarily became popular because (a) it offloaded part of the computation from the CPU to dedicated hardware, freeing up the CPU for other tasks like game logic, AI, and more recently physics (though Nvidia is trying hard to convince developers that hardware-accelerated physics is a viable concept); (b) GPUs increased the amount of available memory; and (c) GPUs dramatically increased the throughput (memory operations per second) of graphics memory.

Memory latency plays a key role in many modern graphics algorithms, such as voxel-based renderers. Often an algorithm needs to repeatedly step a ray through a voxel structure until it hits some kind of geometry. Within an individual pixel of the screen, this is hard to parallelize because the raycasting can't be broken up into independent steps. It typically looks like "while not hit: traceAlongRay();" for each pixel, each frame; i.e. the algorithm can only trace one section of the ray before tracing the next.

That raycasting algorithm is memory-latency-bound: it completes only once it has looked up enough memory locations to detect that the ray has intersected some 3D geometry. In other words, if you reduce memory latency by 2x, and memory bandwidth is sufficient, the algorithm completes twice as fast. Instead of 24 frames per second, you might get 48.
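A minimal sketch of that loop shape (the grid and function names are hypothetical, not any particular renderer): each iteration's memory read must resolve before the next position can even be computed, which is exactly why the per-ray traversal is serial and latency-bound.

```python
def trace_ray(grid, origin, direction, max_steps=1000):
    """March a ray through a 3D occupancy grid one voxel-sized step
    at a time. Each iteration must finish its memory lookup before
    the next can start, so the loop's speed is set by memory latency.
    """
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        ix, iy, iz = int(x), int(y), int(z)
        if not (0 <= ix < len(grid) and 0 <= iy < len(grid[0])
                and 0 <= iz < len(grid[0][0])):
            return None                       # left the volume: no hit
        if grid[ix][iy][iz]:                  # the dependent memory read
            return (ix, iy, iz)               # hit solid geometry
        x, y, z = x + dx, y + dy, z + dz      # advance only after the read
    return None

# Tiny usage example: a 4x4x4 grid with one solid voxel at (2, 1, 1).
grid = [[[False] * 4 for _ in range(4)] for _ in range(4)]
grid[2][1][1] = True
print(trace_ray(grid, (0.0, 1.0, 1.0), (1.0, 0.0, 0.0)))  # -> (2, 1, 1)
```

Real voxel renderers use smarter traversal (DDA stepping, sparse octrees), but the dependent-load structure, and hence the latency sensitivity, is the same.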

So, all that said, if it becomes common to have 1 TB of regular RAM with the latency and bandwidth traditionally offered by GPUs, along with a surplus of CPU cores to offload computation to, then software renderers will once again become preferable to GPU renderers. A software pipeline will always be more flexible than a hardware pipeline, simply because its feature set isn't restricted to the capabilities of the video card it executes on. It's also easier to debug and maintain.

All of that means it will be easier for art pipelines to produce more complex, more immersive visual experiences than at present. But replacing the traditional GPU-based renderer with a CPU-based software renderer will only be practical after a major advance in RAM technology, because current RAM tech can't match the memory bandwidth and latency of a modern GPU. Hence any major development in volatile memory tech is extremely interesting to graphics programmers.


http://www.xbitlabs.com/news/cpu/display/20130305060258_AMD_...

Apparently the AMD Kaveri CPU will support GDDR5. Furthermore, DDR4 is set to appear soon, with significant improvements in bandwidth, latency, and power.

So RAM tech may be lagging behind, but by 2014 or 2015, we'll start to see some interesting advances in RAM.


And a large part of what makes a better GPU is RAM bandwidth. If you're using integrated graphics, there tends to be quite a large difference between system RAM clocked at 1066 and RAM clocked at 1866. If you're using a discrete GPU card, the RAM that matters for gaming performance is already soldered onto the card, but it might still see an improvement from faster memory technologies, since the people who make those cards could use them.


I think you are confusing RAM speed with bus/PCIe bandwidth.


If you're using integrated graphics there isn't any PCI bus involved. Even when graphics was off-die it was on the Northbridge.

And the bandwidth between the GPU and the GDDR on the graphics card doesn't have anything to do with the PCI bus either, except to a small extent when synchronizing with the CPU or initially loading textures or whatever.


Apart from cutting-edge APUs (like AMD Kaveri, which implements hUMA), the memory of integrated GPUs and previous-generation APUs was separate, and copying chunks of data between the two pools happened over the bus.

Plus since we're talking about cutting-edge gaming, integrated graphics is irrelevant (and so are APUs).


Consider moving around very fast in a very big world. 1 TB of RAM would help a lot :)



