I still find it very disappointing that Intel, who now reaches a large share of laptop users through their integrated GPUs, won't support anything beyond OpenGL 4.0 even on Haswell, while they will support the latest DirectX 11.1.
OpenGL 4.3 support could've been very useful in the coming years for indie developers wanting to develop mobile games with OpenGL ES 3.0, which is fully integrated into OpenGL 4.3, and then easily port them to PCs, either natively or in WebGL. But since Intel doesn't seem willing to support 4.3 until at least 2-3 generations from now, there won't be a whole lot of laptops out there that support OpenGL ES 3.0 out of the gate.
And of course, supporting the full OpenGL 4.3 would also show they're as serious about graphics drivers as Nvidia and AMD. But apparently they aren't at that stage yet.
Only two years ago, it was common when releasing a game to have players on Intel chips whose latest drivers supported nothing newer than OpenGL 1.3. And this was on brand new hardware.
The situation looks way better now.
I think the latest betas of the Intel HD drivers have 4.2 support, or at least 4.0. Sadly, the last beta was in September, I think, and there's still no stable release of it. But my guess is they'll get there within the next 6 months.
Of course, telling users to update their drivers is always a hassle, but oh well... Intel does seem to be getting slowly but surely better at GL support in their drivers.
BREAKING: just got a Windows update for the Intel HD drivers, and GPU Caps Viewer tells me the GPU is now at GL 4.0! That's good stuff... Intel's getting there.
Now if Windows Update would stop hiding such updates under "Optional"......
The reason why OpenGL is seen as more unappealing than DirectX is because it is more unappealing. The debugging tools mentioned are bad, much worse than PIX. They do not offer a way to trace back a pixel and see what caused it to be red, or whatever else. On Win8 they constantly crash on me.
OpenGL error handling is a ton of work. What do I do if something stops working? I pretty much scatter glGetError() calls all over my code until I find one place where it returns something.
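To make that slightly less painful, one thing that helps is wrapping the check in a little helper so you at least get a file and line number when something does return an error. A minimal sketch, assuming a GL header (GLEW here) is already included and a context exists; the GL_CHECK name is just something I made up:

    #include <GL/glew.h>
    #include <cstdio>

    // Drain the GL error queue and report where we were when we noticed.
    static void checkGlError(const char* file, int line)
    {
        GLenum err;
        while ((err = glGetError()) != GL_NO_ERROR) {
            std::fprintf(stderr, "GL error 0x%04x at %s:%d\n", err, file, line);
        }
    }

    #define GL_CHECK() checkGlError(__FILE__, __LINE__)

Sprinkling GL_CHECK() after suspicious calls at least narrows down which call set the error, even if it doesn't tell you why.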
The extension system is very hard for beginners to understand, and I count myself among them. Every tutorial you find recommends a different extension loader, each one has to be set up differently, sometimes there's only a Linux makefile, and what do you do if you get some weird error? How do you check whether it's a missing extension, a bug in your code, or a driver bug?
Apparently you can't, and you're supposed to start praying. Sorry for the tone, I guess the time lost had an effect on me.
Still receiving frequent updates and new features. It's been exciting to see it come together.
My biggest caveat has been the OpenGL ES support on Android. It only builds properly on Linux, and it didn't support performance profiling the last time I checked. You're better off using vendor-specific tools like Nvidia's PerfHUD ES for Tegra devices and PVRTrace for PowerVR devices for that sort of work.
There's also no support for OpenGL ES on iOS devices. Fortunately, Apple's OpenGL ES profiling tools for Xcode are (surprisingly) quite solid, and much less of a pain in the butt to set up than any Android OpenGL ES solution. As long as you're content with there being only one game in that town.
EDIT: And for taming GL Extension Hell, GLEW is still the best: http://glew.sourceforge.net . If you see a tutorial that suggests anything else, it's wrong. :)
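For what it's worth, the GLEW setup itself is only a few lines once you have a context. A rough sketch, assuming a window and GL context have already been created with something like GLFW or SDL:

    #include <GL/glew.h>   // include before other GL headers
    #include <cstdio>

    bool initExtensions()
    {
        // A current GL context must exist before glewInit() is called.
        glewExperimental = GL_TRUE;   // needed on some drivers with core profiles
        if (glewInit() != GLEW_OK) {
            std::fprintf(stderr, "glewInit failed\n");
            return false;
        }
        // After init, core versions and extensions can be tested directly:
        if (!GLEW_VERSION_3_2) {
            std::fprintf(stderr, "OpenGL 3.2 not available\n");
            return false;
        }
        return true;
    }

Most of the pain people hit is calling glewInit() before the context exists, not GLEW itself.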
Having used both extensively, I am very disappointed with both systems. It is almost as if they do not care about developers any more. OpenGL has little to no incentive to improve itself because it is not tied closely to any single commercial product, and DirectX used to be very well developed, but it seems Microsoft has borderline lost interest in it (the leap from 9 to 10 to 10.1 to 11 happened fast; since then it has been fairly stagnant). For all intents and purposes, the average developer is still stuck developing against the DX9-level API to maximize compatibility. OpenGL could be great, but some central party needs to take ownership of it (without putting up a walled garden like Microsoft). As it is now, OpenGL is facing the same problems as HTML5: a bunch of committees that are slow to make decisions and don't really care that much about the product. Combine that with the fact that we only have two major card manufacturers now (Nvidia and ATI/AMD), and there is just not enough competition to drive the space effectively. Both products are usable, I am just annoyed by their borderline stagnation.
I disagree, I think GL has been developing quite nicely over the last 2 or so years, driven by lots of innovation coming from Nvidia and ATI, and a close dialog with graphics developers worldwide.
GL is seen as a "crufty", non-OO, C-style API, but that's actually pretty cool in that it gives you very direct GPU access and gently forces you to think in a way that matches how current-gen GPUs actually work. I wouldn't want any magical abstractions over it. The core profile is great because all the outdated legacy stuff is no longer legal in it, giving you a modern, direct-hardware-access graphics API with absolutely zero overhead. Driver support keeps getting better; even Intel has entered GL 4.x land by now. Proprietary Linux drivers and their GL 4.x support are really solid and better than ever. All that remains is for Apple to get up to speed; GL is still at 2.1 or 3.2 depending on your OS X version.
Yeah, that is a good point: improvement has not been entirely lacking. But I would still like to see a bit more innovation. For example, one thing that is kind of ridiculous to me is the continued use of multiple render targets. We have long gone past using textures for their original purpose; now they are basically general-purpose data banks and should be treated as such. I would rather write to one texture with an arbitrary number of channels (even if it's laid out differently inside GPU memory), and include the depth buffer as one of those channels. Also, I would love to see GPU memory get bumped up; we have more or less been resting in the 1-2 GB zone for the past several years thanks to consoles (not an API issue, obviously).
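For context, this is roughly the dance being complained about: each group of channels is its own texture attached to a framebuffer, plus a separate depth attachment. A rough sketch with placeholder sizes and formats, assuming a GL 3.x+ context:

    // Typical MRT setup: separate color textures plus a depth attachment,
    // all bound to one framebuffer object.
    GLuint fbo, color0, color1, depth;

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &color0);
    glBindTexture(GL_TEXTURE_2D, color0);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1280, 720, 0, GL_RGBA, GL_FLOAT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color0, 0);

    glGenTextures(1, &color1);
    glBindTexture(GL_TEXTURE_2D, color1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1280, 720, 0, GL_RGBA, GL_FLOAT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, color1, 0);

    glGenTextures(1, &depth);
    glBindTexture(GL_TEXTURE_2D, depth);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1280, 720, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth, 0);

    const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, buffers);   // fragment shader writes to layout(location = 0 and 1)

The complaint above is essentially that all of this bookkeeping exists instead of just writing to one fat, arbitrarily wide target.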
Yeah, I've seen those, but 2.1 compat or 3.2 core is what you can "count on officially" as of now. I'd rather have code branches that depend only on GL_VERSION, and none that ever depend on extensions... if an extension makes it into the next core profile, it can make it into my code. Then, if that version shows up on OS X 5 years later, great for those Macster users, they're into retro stuff anyway ;)
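In practice that kind of version-only branching is pretty simple. A minimal sketch, assuming a 3.0+ context (glGetString(GL_VERSION) works as a fallback on older ones); the feature paths are just placeholders:

    // Branch on the context version instead of on individual extensions.
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   // available since GL 3.0
    glGetIntegerv(GL_MINOR_VERSION, &minor);

    const bool hasGL43 = (major > 4) || (major == 4 && minor >= 3);
    const bool hasGL32 = (major > 3) || (major == 3 && minor >= 2);

    if (hasGL43) {
        // 4.3 path: compute shaders, debug output, etc. from core
    } else if (hasGL32) {
        // 3.2 core path (the most you can currently count on under OS X)
    } else {
        // 2.1-era fallback
    }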
It might be a Swedishism, I noticed the SE domain.
In the context of graphics programming (in my experience), the "Open" is often dropped here since it's pretty clear which API is being referred to with just "GL". Easier to say, without the forced slowness of the long "O" in "Open".
blogspot redirects one to a country-specific site, presumably so they can censor your posts according to local customs. This makes it difficult to tell where a site is from. (I believe Mr Lottes lives in America, as he works for NVidia, but I know nothing more about him than that.)
"OpenGL" is a trademark of the Khronos Group, and there are implementations, like Mesa3D, that do not have permission to use that trademark. Mesa's implementation is (somewhat jokingly) known as "MesaGL" but is definitely a reasonable and performant GL implementation. "GL" works well for referring to the graphics language without invoking the trademark.
Almost certainly for the next 10 years, but we'll see. The one language I see on the horizon with even a chance of displacing C++ at the top companies is Rust. A lot of companies already use custom scripting languages for parts of the game logic, design, or AI behavior, and some of them even use Python or Lua. If that trend continues to gain popularity and, just as importantly, if toolchains capable of working with such environments improve or are created in the first place, we might see C++ relegated to graphics- and physics-engine work only (and slowly dying there) while the rest of a AAA game is written in a higher-level language. I've read that everyone at Naughty Dog loved GOAL (a Lisp) once they got used to it, but when Sony bought them out there was no toolchain integration on Sony's side, and Andy Gavin had left, so no one really had the expertise to carry GOAL further.
I'm happy that JavaScript is becoming popular for games, since in my conversations with game programmers it seems the biggest hurdle to even evaluate another language is their ignorance about what other languages give them that C++ doesn't.
There have actually been employees at one large, undisclosed game company poking at Rust, curious to gauge its suitability for writing game engines. It's far too early yet to make that call, but the interest is definitely encouraging.
Sorry, I don't. This conversation took place on IRC a while ago (maybe a month? less?) and I can't recall the specifics. All I remember is that it was a company large enough to have an R&D division (the user politely declined to mention the company's name), and that he'd been using Rust for several months, which is highly surprising given the incredibly volatile state of the language throughout the past year.
If you're dead-set on a citation, you can check the IRC logs at http://irclog.gr/#browse/irc.mozilla.org/rust/ ...but be warned that the search function is rather lacking (I had no luck remembering any terms to conjure the correct results). Your best (and most tedious) bet is probably to just run backwards in the history until you find it.
Last time I looked, you could turn the D garbage collector off (both completely and temporarily).
While Unity uses C#, it still seems to use C/C++ components for graphics and physics and such (going by the libraries it uses internally, anyway) and allows you to call native code if you wish.
> Last time I looked, you could turn the D garbage collector off (both completely and temporarily).
You can, but it's hard to ensure a strict no-allocation policy afterwards (someone did it recently).
A moderate amount of allocation each frame is usually OK, and this routinely happens in C++ engines too. In my opinion, it's absolutely possible to make a game in D with the GC enabled. It's a bit like not making too many draw calls in a frame.
The very small list of unsupported APIs is extraneous and unnecessary in the context of cross-platform code. So, what APIs are you referring to, specifically?
Probably, however C++ is changing with the times (if perhaps slightly slower than some people would like). C++ written today taking advantage of everything that C++11 brings looks quite different from C++ written 15 years ago. And likewise I imagine that the C++ that will be written 15 years from now will look equally different from the C++ written today.
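A small example to illustrate the kind of shift meant here; the Entity type and the cullDead function are made up for illustration:

    #include <algorithm>
    #include <memory>
    #include <string>
    #include <vector>

    struct Entity { std::string name; int hp = 0; };   // illustrative type

    // C++11-style code: smart pointers, lambdas, auto, range-based for.
    // Fifteen years ago this would have been raw new/delete and explicit iterators.
    void cullDead(std::vector<std::unique_ptr<Entity>>& entities)
    {
        entities.erase(
            std::remove_if(entities.begin(), entities.end(),
                           [](const std::unique_ptr<Entity>& e) { return e->hp <= 0; }),
            entities.end());

        for (const auto& e : entities) {
            e->hp += 1;   // trivial per-frame update, just to show range-for and auto
        }
    }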