Since the article and the linked nVidia marketing page both fail to adequately explain what Optimus is, I'm going to attempt to do so.

nVidia's GPU technology focuses on maximum performance. Their GPUs are power-hungry, can render billions of triangles per second, and expose an interesting programming interface for operations such as custom shading. (Within the last few years, nVidia and AMD have improved support for using their GPUs in computation-intensive non-graphical applications. For example, people use GPUs for Bitcoin mining, for the good and simple reason that their computational throughput on SHA-256 hashes blows CPUs out of the water [1].)

Intel's GPU technology focuses on low power and low cost; as long as their GPUs can run the Windows Aero GUI and play streaming video, it seems Intel doesn't feel a need to push their performance. The combination of low power and low cost makes Intel GPUs ubiquitous in non-gaming laptops.

nVidia has observed that Intel GPUs have become cheap enough that it's viable to put both a weak, low-power Intel GPU and a strong, high-power nVidia GPU in the same laptop. nVidia calls this combination Optimus.

Under Optimus, the Intel GPU is used by default for graphically light usage, meaning Web browsing, spreadsheets, homework, taxes, programming (other than 3D applications), watching video, etc. -- since it's low-power, you get good battery life.

When it's time to play games, of course, the driver can switch on the nVidia GPU -- hopefully near a power outlet.
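
As an aside: on Windows, an application can also request the discrete GPU itself. Here's a minimal sketch in C, assuming the NvOptimusEnablement export that nVidia documents for its Optimus rendering policies; whether a given driver actually honors it depends on driver version and settings:

    /* Exporting this symbol from the .exe is the documented hint that the
     * Optimus driver should run this application on the nVidia GPU. */
    #include <windows.h>

    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;

    int main(void)
    {
        /* ...create a window and a GL/D3D context here; with the export
         * above, rendering should land on the discrete GPU rather than
         * the Intel one. */
        return 0;
    }

This is only a hint; the per-application setting in the nVidia control panel can still override it.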

I think Optimus technology has been available for over two years. However, nVidia's Linux drivers do not yet support it, leading to much gnashing of teeth and to the third-party Bumblebee solution, which I discuss in another comment [2].

[1] https://en.bitcoin.it/wiki/Mining_hardware_comparison

[2] http://news.ycombinator.com/item?id=4471411




You basically missed the entire point of it or why it's interesting.

Current Intel CPUs have a very small GPU built onto the same die as the CPU. By buying the CPU, you're paying for an Intel GPU anyway.

At the same time, unlike CPU performance, GPU performance really does scale with die area: if you want more graphics performance, get a bigger chip with more ALUs/texture units/etc., and because graphics is so parallel, everything will just go faster. However, larger chips leak more power when they are powered but idle, which means battery life can be significantly worse.
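
To make the "graphics is so parallel" point concrete, here's a toy sketch in plain C (the shading function and framebuffer size are made up, and an OpenMP pragma stands in for a GPU's array of execution units): every pixel is computed independently, so adding ALUs translates almost directly into more pixels per second.

    #include <stdint.h>
    #include <stdlib.h>

    #define W 1920
    #define H 1080

    /* Stand-in for a real pixel shader: output depends only on (x, y). */
    static uint32_t shade(int x, int y)
    {
        return (uint32_t)(x * 2654435761u ^ y * 40503u);
    }

    int main(void)
    {
        uint32_t *fb = malloc(sizeof *fb * W * H);
        if (!fb) return 1;

        /* No pixel depends on any other, so the work splits cleanly across
         * however many execution units the hardware has. */
        #pragma omp parallel for
        for (int i = 0; i < W * H; i++)
            fb[i] = shade(i % W, i / W);

        free(fb);
        return 0;
    }

(Compile with -fopenmp to actually parallelize; the point is only that the loop body has no cross-iteration dependencies.)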

What Optimus does is allow the Intel GPU to be connected to the display hardware and be used most of the time (e.g., when you're looking at email or whatever) when high performance isn't called for. At that point, the NVIDIA GPU can be turned off completely, meaning no leakage and no battery life degradation. If you want high performance, the NVIDIA GPU is enabled on the fly, rendering is done on the NVIDIA GPU, and the final result is sent in some way to the Intel GPU, which can then use its actual display connections to put something on the LCD.

Prior to Optimus, there was generally a mux that switched which GPU was driving the screen. This was messy: everything had to be done on one GPU or the other, switching was noticeably heavyweight, occasionally required reboots, increased hardware complexity, etc.

The biggest issue with Optimus on Linux is that the infrastructure for actually sharing the rendered output didn't really exist until dmabuf appeared: you need two drivers to be able to safely share a piece of pinned system memory such that both can DMA to/from that memory while being protected from any sort of bad behavior by the other. (I also think it's impossible to have two different drivers sharing the same X screen, which is why Bumblebee works the way it does.)
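
To give a flavor of what dmabuf/PRIME provides, here's a minimal userspace sketch in C using libdrm. It is not the actual driver path, and the device paths and the use of a "dumb" buffer are illustrative assumptions; the point is just that a buffer allocated on one DRM device can be exported as a file descriptor and imported by another device, so both can DMA to the same pinned memory:

    /* Build (roughly): cc share.c -I/usr/include/libdrm -ldrm */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        /* Hypothetical layout: card1 = render GPU, card0 = display GPU. */
        int render_fd  = open("/dev/dri/card1", O_RDWR | O_CLOEXEC);
        int display_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (render_fd < 0 || display_fd < 0) {
            perror("open");
            return 1;
        }

        /* Allocate a buffer on the render GPU; a "dumb" buffer keeps the
         * example driver-agnostic, a real driver would use a native BO. */
        struct drm_mode_create_dumb create = {
            .width = 1920, .height = 1080, .bpp = 32
        };
        if (drmIoctl(render_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create)) {
            perror("create dumb buffer");
            return 1;
        }

        /* Export the buffer as a dma-buf file descriptor... */
        int prime_fd;
        if (drmPrimeHandleToFD(render_fd, create.handle, DRM_CLOEXEC,
                               &prime_fd)) {
            perror("export dma-buf");
            return 1;
        }

        /* ...and import it on the display GPU. Both drivers now reference
         * the same memory and can DMA to/from it without trusting each
         * other's internals. */
        uint32_t display_handle;
        if (drmPrimeFDToHandle(display_fd, prime_fd, &display_handle)) {
            perror("import dma-buf");
            return 1;
        }

        printf("shared buffer: render handle %u -> display handle %u\n",
               create.handle, display_handle);
        return 0;
    }

In the real Optimus path, roughly speaking, the nVidia driver renders into such a shared buffer and the Intel driver scans it out or composites it; the export/import handshake is the part dmabuf made possible.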


> Current Intel CPUs have a very small GPU built on to the die of the CPU. By buying the CPU, you're paying for an Intel GPU anyway.

I was not aware of this fact.

> the Intel GPU to be connected to the display hardware and be used most of the time...the NVIDIA GPU is enabled on the fly...

I did mention these aspects of Optimus.

> rendering is done on the NVIDIA GPU, and the final result is sent in some way to the Intel GPU, which can then use its actual display connections to put something on the LCD.

I guess I missed the point that the architecture is like this:

nVidia <-> Intel <-> Display

instead of like this:

Display <-> nVidia

Display <-> Intel

I was a little fuzzy on this point myself, so I appreciate the clarification!

> the infrastructure for the actual sharing of the rendered output didn't really exist until dmabuf appeared

I'd certainly believe that the current approach to Optimus support in the nVidia driver was enabled by dmabuf. But Bumblebee shows it's possible to use Optimus on Linux without that particular kernel feature.


The lack of any sort of physical connection to the NVIDIA GPU's display outputs is the fundamental feature of Optimus. Switchable graphics existed for years before Optimus was introduced (and was usually usable under Linux without issue), but it remained largely a niche feature because of its usability drawbacks.


> it seems Intel doesn't feel a need to push their performance.

Then why does the new HD4000 have twice the performance of the previous version? :)

Intel is trying its best. It just takes time to catch up to the lead of ATI and NVidia. With the HD4000 it's quite a bit closer.



