AMD says DirectX is hobbling PC graphics (bit-tech.net)
71 points by iwwr on March 19, 2011 | 25 comments



Some of the Slashdot comments[1][2][3] on this clarify what he really means. It seems the guy from AMD overgeneralized his remarks for the press.

1: http://games.slashdot.org/comments.pl?sid=2044704&cid=35...

2: http://games.slashdot.org/comments.pl?sid=2044704&cid=35...

3: http://games.slashdot.org/comments.pl?sid=2044704&cid=35...


High-end PC GPUs are ten times better than console GPUs, so why don't PC games look ten times better?

Game companies can't optimize games primarily for the best possible PCs. Hardcore gamers who build their own high-end gaming systems pay the same $50 for a game as do casual gamers who run them on laptops[0]. There's a point of diminishing returns when trying to make a game look spectacular on the best hardware but able to degrade to run on what the majority of the market actually has. Compounding the effect is the fact that it's no longer necessary to constantly upgrade a computer to keep it useful, so casual gamers aren't upgrading hardware for non-gaming purposes the way they used to.

[0] Game companies try to get a bit more money out of hardcore gamers through special editions, but that's limited.


I think game companies can optimize for high-end PCs for the same reason that car companies can make high-end models: you don't get a reputation as a kick-ass game by running OK on OK hardware. You get it by being kick-ass on hardware most people wouldn't even dream of buying, because the people who write the reviews do run such systems.


Actually, I think you get a reputation for being a kick-ass game by having awesome gameplay and (when appropriate) a good story. Sure, looking pretty is nice, but I don't think graphics matter quite as much as the gaming industry thinks.


>I think game companies can optimize for high-end PCs for the same reason that car companies can make high-end models

It will be interesting to see how Battlefield 3 sells compared to Black Ops. CoD has looked good at times but has never blown me away graphically; however, it mints money from the general market. Battlefield and the Frostbite engine look several years ahead of the CoD engine. What will the sales look like?

>High-end PC GPUs are ten times better than console GPUs, so why don't PC games look ten times better?

Curious, I would love to hear Valve's thoughts on this too. Several Source-based games are critically considered the best of the best, but again the graphics have only looked good at times, yet Valve is printing money.


Except the problem with that mentality is that you now have to start charging more (as car companies do for their high-end models). How much money could id Software or Epic Games make on a title that requires a $2,000 PC to play? How many copies could they sell? Would the support nightmare be worth it? Think Ferrari/Porsche money compared to Honda/Toyota money. What do you "drive"?

That being said - I would like to see someone try this, and I would probably buy said game (unless it was another stupid CoD clone).


Once upon a time there was a Blit terminal [1], which allowed processes to display content in independent windows. One process managed the windows; the others could draw only in their own windows, and the terminal took care of enforcing that.

In an ideal future, the GPU could be extended to support concurrent access from several processes, each with its own context. The GPU would enforce separation; each process would draw to its own window, without going through syscalls...

----

[1] http://en.wikipedia.org/wiki/Blit_(computer_terminal)


Okay, AMD: Stop making new graphics cards. Stop the market churn, let it get uniform and predictable and then let developers catch up and start stretching the metal. Remember, content has to catch up, too, and that's several years worth of effort.

But in the mean time, your competitors are going to eat your lunch.

That's what consoles do. They provide a large, uniform ecosystem which makes it possible for developers to stretch performance without breaking the bank on testing or risking tens of millions of dollars on a buggy dud.

Any time you want to declare your graphics cards "a console," feel free to stop introducing higher powered hardware.


I think the hardware guys want to offer an alternative, one that doesn't penalize them for sucking at some aspect of the DirectX API model.

In the way, way back times there was a spunky startup called 3dfx which made the 'Voodoo' video card. It was fast, and it had enough fill rate that mip-mapped textures really flew. There was another company called nVidia that made a new graphics engine based on NURBS. 3dfx published a straightforward API called 'Glide' which was pretty close to the metal, and nVidia partnered with Sega to publish an API for its new engine.

It was a great time to program graphics. I had both cards and really liked the Glide API (I even wrote a toy 3D graphics engine on top of it). But one of the things I didn't like was that a game would ship as the 'Voodoo' version (as was the case with Tomb Raider 1 and 2) or the 'Sega' version. It would look great with the right hardware and look like crap without it (falling back to software rendering).

So Microsoft created an API to rule them all and said "do your best with this API," and while games were not as impressive as they might have been with a direct/dedicated API, they were at least workable on several different configs (and the PC has a bazillion configurations). Once the graphics hardware crossed the minimum acceptable visual threshold, somewhere in the Nvidia TNT2 / PowerVR / Voodoo2 era with the crappy but functional DirectX 3, the convenience of not having to patch trumped best fidelity for most of the market. All of the private APIs stopped being worked on at that point.

There was the OpenGL/DirectX debate, but even Carmack suggests that without an active SGI pushing it forward, DirectX has eclipsed OpenGL in terms of capability.

So now that DirectX is so dominant, we come to AMD's issue. AMD (actually the old ATI, now part of AMD) has some really killer graphics architect types. They can imagine really, really cool ways of connecting CPUs to GPUs to memory which would allow stunning realism with less work on the part of the programmer (which is code for 'even lame programmers would look good'). Except that programmers won't program to a graphics card feature if it isn't in DirectX, because then they have a fork in their code base, or it sucks on vendor Y which doesn't do X, etc.

So for a graphics architecture type at ATI(AMD) to get his or her cool feature into production, they have to design it, build it, convince Microsoft to add it to DirectX, possibly give up rights to some of the IP so that others can implement something like it, and then wait for everyone to catch up, so that when DirectX version n+1 ships you can use it on your card, which is now 2 years old.

Kinda sucks doesn't it?

If ATI(AMD) offered their own API that they wrote, then they could update it at the same rate they rolled out new silicon features and be much more agile. Hence our friend Huddy, in the referenced article, trying to make the argument that you (the game developer) would be better off if you weren't "held back" by DirectX.

I've talked with folks like Huddy in the past and have suggested that one strategy for making this happen would be to go 'open' with the API: publish source code that could be compiled on open source systems, so that non-Windows OSes could support high-performance 3D acceleration (all features) without having to run a Windows driver or the Windows OS. The hope is that in doing so we might create a 'better' gaming/graphics experience outside of Microsoft's control, which in turn would put pressure on Microsoft should that loss of control be perceived as threatening their core business.

The challenge for that strategy is the whole "we are stuck in the X11 model stone age" that most of these OS's are mired in. (I am so tempted to resurrect SunTools, it probably sucks more than I remember but it was really fast even on a Sun3 so it should be like prescient on a modern machine)

So you've got the graphics card vendors looking for more flexibility, and that is a good thing. We should try to leverage that to get better (usable) open source drivers out of them.


The interesting thing about this comment is that OGL allows such vendor-specific extensions without totally destroying everything else. There's no need to sit around for 2-3 years and wait on MS; people can start programming for that hardware relatively easily. My understanding (I'm not too involved in 3D) is that vendors usually mimic any unique extensions their competitors might have, but even so it's easy to test caps and turn off a feature if the hardware doesn't support it (in fact, all games do this, even DX games; you still have to test caps and profile hardware to know which code paths to run and which features to enable). A well-architected engine is flexible and lets you write sfx that only one card on the market supports without much hassle, and this is a lot easier to do in OGL with its extension model than in DX, afaik.
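
Roughly, the startup code looks something like this (just a sketch from memory; "GL_AMD_fancy_blend" is a made-up placeholder for whatever vendor extension you actually care about):

  /* Sketch: query the extension string once at startup and flip feature
     flags.  Needs a current GL context. */
  #include <string.h>
  #include <GL/gl.h>

  static int has_extension(const char *name)
  {
      const char *ext = (const char *)glGetString(GL_EXTENSIONS);
      return ext != NULL && strstr(ext, name) != NULL;  /* naive substring test */
  }

  void init_renderer(void)
  {
      /* "GL_AMD_fancy_blend" is hypothetical, not a real extension */
      int use_fancy_blend = has_extension("GL_AMD_fancy_blend");
      if (!use_fancy_blend) {
          /* fall back to the multi-pass path every card can do */
      }
  }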

Again, not too heavy on the 3D, please correct anything I got wrong.

I agree about open drivers, though. There is such a wide field of possibilities with open drivers if vendors would only take them seriously and quit being so paranoid about "their IP" and all that. I think great OSS drivers are one of the most important goals for desktop Linux atm.


"The challenge for that strategy is the whole 'we are stuck in the X11 model stone age' that most of these OS's are mired in."

Are Gallium3D or even Wayland (in the distant future...) going to make any difference here? Or is it still crap?


"There was the OpenGL/DirectX debate, but even Carmack suggests that without an active SGI pushing it forward that DirectX has eclipsed OpenGL it in terms of capability."

Ironically it's the higher level of DirectX that Carmack praises nowadays, with its very high level tools, development process and general approach. In that regard I've always found OpenGL to be closer to the metal than DX while still being hardware independent, in the sense that C is hardware independent as opposed to assembly. DX always struck me as "more managed" in some way. I'm not a seasoned 3d developer though, and have been mostly hacking stuff around.

"So for a graphics architecture type at ATI(AMD) to get his or her cool feature in production, they have to design it, build it, convince Microsoft to add it to DirectX..."

In that regard, OpenGL evolution works a bit more like recent HTML and CSS history, in that a chip maker creates a driver which provides the current standard OpenGL calls plus a bunch of innovative proprietary extensions, readily available right now, just like the -webkit-* and -moz-* CSS stuff.


I've never heard of SunTools before. I googled it and found SunView on Wikipedia: http://en.wikipedia.org/wiki/SunView.

Reminds me of Haiku OS (the open source BeOS clone), the Amiga, and the Linux framebuffer. All of them sit directly (afaik) on the kernel, so they're fast.


In an ideal future, we'd have something like x86, but for graphics hardware. Standardizing the assembly language would go a long way towards encouraging good compiler design and third-party libraries for rendering.


Strongly disagree - the fact that the GPU ISA is undocumented lets GPU vendors innovate faster, and lets them modify the ISA not only between models, but also within the same architecture family.

NVIDIA documents only PTX, while the actual ISA changes frequently. Same with AMD: CAL is documented, but the actual low-level ISA can change. For example, they recently moved from VLIW5 to VLIW4.

I also think that reliance on a hard ISA (x86 or ARM) in the modern server-desktop-mobile world is doing more harm than good.

Today there are just too many layers/levels, for example in NVIDIA:

  CUDA -> LLVM -> PTX -> native ISA
  CUDA -> LLVM -> x86 -> Intel Microcode
  OpenCL -> CUDA -> LLVM -> PTX -> native ISA
  OpenCL -> CUDA -> LLVM -> x86 -> Intel Microcode
or in AMD:

  OpenCL -> LLVM -> CAL -> native ISA
  OpenCL -> LLVM -> x86 -> AMD Microcode
C was that kind of universal ISA in the RISC UNIX world. But C can't express vectorization and SIMD efficiently and in a platform-independent way. Enter OpenCL!
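
For instance, a trivial OpenCL C kernel written against the standard vector types compiles down to whatever SIMD/VLIW the vendor's stack targets (just a minimal sketch):

  /* Minimal sketch: scale an array of float4s in place.  float4 and the
     work-item indexing are standard OpenCL C, so the same source can be
     compiled to NVIDIA PTX, AMD CAL/IL, or an x86 CPU backend. */
  __kernel void scale(__global float4 *buf, const float k)
  {
      size_t i = get_global_id(0);  /* one work-item per float4 element */
      buf[i] = buf[i] * k;          /* 4-wide multiply, vectorized by the vendor compiler */
  }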


Strongly agree =). Once I was coding assembly and thought I might be able to avoid operating-system-specific graphics APIs by going through OpenGL.

Unfortunately, both OpenGL and DirectX are C libraries. A graphics driver is just a C library translating the OpenGL API to your card's undocumented internals. As a result, you talk with C calling conventions and C data structures (which are a chore without a C compiler's help).

This shouldn't limit innovation. It is just a matter of specifying the calling convention and the structures (and choosing ones that are easy in ASM). A C library would then sit on top of the ASM one, which is the natural order of things =).


I strongly disagree. Enforcing an ISA would prevent a lot of the technological advances in graphics card design.


I think unifying the assembly language would be quite difficult given the large churn in the underlying hardware architectures as GPUs evolve. DirectX is mostly fine, and it has served well in allowing games to keep working even when they were developed against much older versions of the API.

Perhaps more work could be done on the driver side. For example, let developers hint at the formats of the data they use so the driver doesn't always have to check for degenerate conditions, like textures of a certain size or mismatches between vertex shader outputs and pixel shader inputs. Maybe the old capabilities model of DX9 would allow the option of a faster path.
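
For reference, the old caps model looks roughly like this (a sketch using the DX9 C interface, error handling omitted; the hints suggested above would flow the other way, from app to driver):

  #include <d3d9.h>

  void pick_render_path(void)
  {
      IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
      D3DCAPS9 caps;

      /* ask the driver what the hardware can actually do */
      d3d->lpVtbl->GetDeviceCaps(d3d, D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

      if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0)) {
          /* take the single-pass SM3 path */
      } else {
          /* fall back to the multi-pass path */
      }

      d3d->lpVtbl->Release(d3d);
  }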


That future might already be happening. Both AMD and Intel have products that pair a CPU with a GPU, ARM has Mali, and NVIDIA's Tegra has an on-chip GPU.

I think discrete GPUs will become niche products in the future. Once mainstream GPUs are on-chip, the variety of different GPU architectures will probably be reduced. The next step might be a standard ISA for GPU.

It is hard to say where GPUs will be in three years, but at least the industry is getting interesting again. It has been more of the same for so many years in discrete GPUs, but now the on-chip GPUs are potentially game changing.


Even though your vision is probably right, I'm not entirely happy with it. I love choice; I love being able to choose a certain processor and GPU and to upgrade one of them after a year.

I'm probably in the minority though, so business-wise it makes sense.


The future will be a framebuffer. We're moving to a GPU-on-core future, and the graphics card simply becomes a framebuffer at that point - a la video cards from the '80s and '90s.

So the likely future? x86+gpu extensions or arm+gpu extensions.


Obviously, games "looking ten times better" is poorly defined. Even so, I would argue that PC gaming actually is many times better graphically than the 360. Most PC gamers have at least a 1080p display, and even modest PC hardware can handle that. Modern "HD" consoles, however, almost invariably run their games (especially the blockbuster action titles) at a much lower resolution like 600p or 660p. Also, PC games have had antialiasing for many years, something many (if not most) console titles noticeably lack.
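
For a rough sense of scale (taking the widely reported 1152x640 render target of a title like Halo 3 against a PC at native 1080p):

  1920 x 1080 = 2,073,600 pixels per frame
  1152 x  640 =   737,280 pixels per frame
  ratio       ~ 2.8x

So the PC is pushing nearly three times the pixels before you even get to AA or texture quality.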


4x AA is standard in 360 games.


After playing Halo 3 on a 1080p display, I find that hard to believe.


4x AA can't fix 640p resolution on a 1080p display.



