3D Portability Initiative (khronos.org)
91 points by ooyy on March 1, 2017 | 50 comments



This proposal makes much more sense than Apple's "Metal Only" solution (https://webkit.org/blog/7380/next-generation-3d-graphics-on-...). I liked Metal at the beginning, but over time it turned out to be just another OpenGL/DX flavor with the same old driver/performance problems. Also, using SPIR-V as the IL allows everyone to use their preferred shader language. WebGL Next has to be about parallelism and abstraction; otherwise there would be absolutely no benefit.


Can you comment a bit more on the limitations of Metal?

I've been spending the last few days reading through the docs and learning how to use Metal. So, I'm curious what you (and anyone else, I guess) perceive its shortcomings to be.


My personal biggest concern was the re-invention of OpenGL/DX features without proper knowledge of why and how these features are used. For example, I never managed to apply a skeletal animation to an indirect indexed draw call, or to use transform feedback loops for ray-casting or other deferred shading techniques.


In accordance with federal law (http://n-gate.com/), I'm obligated to point out this Rust library, which does exactly this! https://github.com/gfx-rs/gfx. I worked on the second version of the API in 2014 and it's only gotten better since. The major thing this particular abstraction still doesn't solve is shaders: you need to write a shader per backend you support. Shader portability is a bit of a sticky problem.


Huh. I thought Vulkan was supposed to be the new portable 3D standard from Khronos. Now we need a new new standard on top of the old new standard?


Supposed to be, but Apple and probably parts of Microsoft don't want to play ball.


So why would they play ball with this?


I understand this proposal as suggesting to build an API that can be implemented efficiently on top of the vendor-specific APIs, without the vendors having to do anything. E.g. similar to how the ANGLE library offers OpenGL on top of the Direct3D APIs, for Windows PCs with missing or poor OpenGL support.
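
As a toy illustration of that layering idea (not ANGLE's actual code; the enum values are copied from the GL and D3D11 headers, and a real translator covers vastly more surface than primitive topology):

    #include <stdio.h>

    /* GL primitive modes (numeric values from the GL headers) */
    enum { GL_LINES = 0x0001, GL_TRIANGLES = 0x0004, GL_TRIANGLE_STRIP = 0x0005 };
    /* D3D11 primitive topologies (numeric values from d3d11.h) */
    enum { D3D11_LINELIST = 2, D3D11_TRIANGLELIST = 4, D3D11_TRIANGLESTRIP = 5 };

    /* the core of an ANGLE-style layer: translate one API's state to another's */
    static int translate_topology(int gl_mode) {
        switch (gl_mode) {
        case GL_LINES:          return D3D11_LINELIST;
        case GL_TRIANGLES:      return D3D11_TRIANGLELIST;
        case GL_TRIANGLE_STRIP: return D3D11_TRIANGLESTRIP;
        default:                return -1; /* unsupported mode */
        }
    }

    int main(void) {
        printf("GL_TRIANGLES maps to D3D topology %d\n",
               translate_topology(GL_TRIANGLES));
        return 0;
    }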


Is Khronos going to implement something? Because AFAIK they only write the specs for others to implement - so it is up to each vendor to make the API available. If Apple and Microsoft aren't going to implement Vulkan, why would they implement something else?


The point of Khronos was never to implement anything. It serves as a coordination mechanism among industry players, where the specs for OpenGL, Vulkan etc. are debated, hashed out and formalized.

I'm not exactly sure what your last question means. Vulkan will be available on Windows; I think it is more correct to say that NVidia/AMD/Intel implement the Vulkan API in their graphics drivers and Windows exposes the API to applications. At the same time, Microsoft and Apple do seem to see benefits in creating their own APIs, i.e. DirectX 12 and Metal. So we'll see, I guess.


Recently, Khronos has also been implementing some of the infrastructure, like the Vulkan loader and validation layers, the GLSL-to-SPIR-V compiler (glslang) and the SPIR-V tools, which are open source and developed by LunarG and others.
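
For instance, an application opts into the validation layers at instance creation time. A minimal C sketch, assuming the LunarG SDK (which ships the 2017-era "standard_validation" meta-layer) is installed:

    #include <vulkan/vulkan.h>
    #include <stdio.h>

    int main(void) {
        /* the meta-layer bundling LunarG's individual validation layers */
        const char *layers[] = { "VK_LAYER_LUNARG_standard_validation" };

        VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                                  .apiVersion = VK_API_VERSION_1_0 };
        VkInstanceCreateInfo info = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                      .pApplicationInfo = &app,
                                      .enabledLayerCount = 1,
                                      .ppEnabledLayerNames = layers };

        VkInstance instance;
        if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
            fprintf(stderr, "instance creation failed (layer not installed?)\n");
            return 1;
        }
        puts("Vulkan instance created with validation enabled");
        vkDestroyInstance(instance, NULL);
        return 0;
    }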

The actual driver parts are written by hardware vendors (IHVs in Khronos-speak), but there's a push to put less stuff in the drivers and more into code that is common to all vendors.

Not sure about this project, though. Looks to me like an effort to get the relevant parties to discuss this at GDC and see what comes up.


It could be that this is a specification for how to implement some new API which behaves consistently across the trio, i.e. "this function invokes X with parameters Y in Vulkan, X with Y in Metal, X with Y in DirectX", since doing that correctly and comprehensively is no trivial task.

But I'm sure that'd also include a real implementation.. so.. yeah.


Khronos is an industry standards group. It is composed of companies that implement Khronos APIs. So yes, in a way.


Khronos is a group composed of companies like Apple and Microsoft. If Khronos reaches consensus about something being a good idea, then that necessarily means that the member companies think it's a good idea—and so will implement it.


This seems like a better approach than Apple's WebGPU in that it is more realistic (it depends on using a reasonably common, usable subset of the three APIs) and is not started by the same company that is the biggest obstacle to cross-platform graphics APIs in the first place.

It seems unfortunate that it is necessary, though. It is clear that neither Metal nor DirectX can be the cross-platform API we'd all like, so it seems a shame Microsoft and Apple can't just support Vulkan.


Why not just port some subset of Vulkan to Apple?


There is a commercial, proprietary Vulkan implementation (called MoltenVK) that runs on top of Apple's Metal.

This is not a technical issue, but a political one. Apple has decided not to allow Vulkan in their walled garden.


This sounds like a response to Apple's WebGPU initiative?[1]

https://news.ycombinator.com/item?id=13593272


The emulation cost of implementing this solution on non-Apple systems would be enormous (= slow). SPIR-V could be used natively on most devices.


I'm not sure why Vulkan couldn't work on Windows: isn't it mostly in the hands of the driver developers (i.e. Nvidia/AMD) rather than Microsoft?


Microsoft is not just Windows; there are mobile devices and, more importantly, the Xbox as well. On Windows, the GPU makers can (and do, at least partially/in beta) provide drivers with Vulkan support, yes.


It does work on Windows. Not sure about Windows Phone though.


UWP only allows DX, but one can build other APIs on top of it, e.g. ANGLE for OpenGL support.


Who needs a "next generation WebGL"? It's hard to find an interesting or useful WebGL site. There are plenty of demos and ads, but few sites worth visiting.

Here, go shoot some zombies.

http://www.y8.com/games/abandoned_island

That's about as good as it gets.


The same was said about sound, JavaScript, web video, and pretty much any addition on top of basic markup.

I think the future web will be better and/or can consume less power when sites have more direct control of rendering.

Emphasis on power usage -- I think they (= browser vendors) should give a lot of consideration to how to enable developers to save power. Web rendering is already mainly done on GPUs anyway, but there's an opportunity to push fewer pixels to achieve the same results as current rendering methods. Pushing fewer pixels means less power. Of course, passive power savings (from the developers' point of view) are also good.

WebGL must also not perceptibly increase page load times; a few milliseconds is acceptable.

If the features turn out to be useful, when reliable WebGL is ubiquitous, various javascript libraries will start to use it and number of sites using WebGL will increase rapidly.


> WebGL must also not perceptibly increase page load times; a few milliseconds is acceptable.

Almost every non-trivial WebGL page I've seen shows a "Loading" message for some period of time.


Well, I meant just the fixed base cost in milliseconds. In other words, how long it takes to output something extremely simple, say, a flat-shaded triangle, without external libraries. Not measuring resource loading and other initialization.

Of course initialization and any shader compilation (SPIR-V or whatever) will take time on top of that.

Many current WebGL apps require a lot of large assets: textures, meshes and libraries. Obviously, loading those will take time.


Flash had a scheduled loading scheme: the timeline specified when an asset would be sent as you played a Flash animation. Macromedia had authoring tools which would try to slot the assets into the timeline so they'd be available when needed, while staying under a bandwidth limit.

You could, in theory, do that with WebGL. Google is all fired up about "preloading" lately. This is an authoring tool problem, and if WebGL games get serious, there will probably be author-side tools for this.
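
In C, a toy sketch of that scheduling check (assumed constant bandwidth and made-up numbers; real authoring tools were of course far more sophisticated):

    #include <stdio.h>

    /* One streamed asset: size in KB and the timeline moment it's needed. */
    typedef struct { const char *name; double size_kb; double needed_at_s; } Asset;

    int main(void) {
        const double bandwidth_kbps = 500.0;  /* assumed download rate, KB/s */
        /* assets sorted by when the timeline needs them */
        Asset assets[] = {
            { "intro_music",  200.0,  2.0 },
            { "level_mesh",   800.0,  5.0 },
            { "boss_texture", 1500.0, 12.0 },
        };
        double t = 0.0;  /* when the download pipe becomes free */
        for (int i = 0; i < 3; i++) {
            double done = t + assets[i].size_kb / bandwidth_kbps;
            printf("%-12s downloads %4.1fs-%4.1fs, needed at %4.1fs -> %s\n",
                   assets[i].name, t, done, assets[i].needed_at_s,
                   done <= assets[i].needed_at_s ? "ok" : "STALL");
            t = done;
        }
        return 0;
    }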


One look at today's web should tell you how much developers get paid for conserving power, as opposed to adding bling.


Which is why I only do web dev when I get paid to do it; for me, programming for fun means native applications.


Google Maps doesn't work without WebGL, and neither do some other map-related websites, like Mapillary.


It seems to work in Firefox with WebGL disabled.


Right, but the proposal is about an API which lets web applications do shader-language stuff beyond the level of WebGL. Is this really necessary?


The new APIs (Vulkan, DX12, Metal) are mostly about performance. I can't think of any major thing that you can't implement in WebGL. But, a portable 3d API would be awesome for native apps. And, why not have that be the 3d API for the web too? Like WebGL / OpenGL ES 2.


Why on Earth (no pun intended) should Google Maps require WebGL?

Another example of people losing focus on providing actual value with the products they make.


Phones.


It's really the only way forward for widely accessible GPU programming, since the native scene is so fragmented and such a kaleidoscope of OS-crashing buggy drivers. Of course it's possible that will remain a niche thing, but with the just-arrived WebGL 2 and the imminent arrival of shared memory in Web Workers, there are a lot of new possibilities opening up. All this AI-on-GPU stuff that's in the headlines lately, for example.

The recently discussed Qt remote WebGL app feature is a nice example of what you can do even with WebGL 1 (https://news.ycombinator.com/item?id=13744631).

Also, a lot of non-game 3D customers have historically been in-house scientific/data visualization users. I suspect many of those are already moving to WebGL and internal web apps, but you won't see them much on public web sites.

It took a good while for interesting or useful JS apps to arrive too...


It is not easy to pull off impressive things with an API that lags 10 years behind what is available on the desktop, while fighting all kinds of browser idiosyncrasies.

We've got those super powerful processors that can be one or two orders of magnitude faster than CPUs, but their performance is mostly wasted because there is no way to do general compute on GPUs in browsers.

But even so, there are still some pretty cool things being done with WebGL:

https://www.shadertoy.com/ http://glslsandbox.com/


Medical imaging comes to mind. Probably lots of other stuff. Ikea and others already have those kitchen planners, they can still be improved a lot.


Isn't the de facto answer to this 'Unity' or 'Unreal Engine'?


Those seem like de facto workarounds for the absence of 3D portability. Using an engine locks you into the constructs of that engine, though. It seems like it'd be kind of backwards to do that if you're using an API to get closer to the hardware for performance purposes.


And a full-blown game engine is a very heavyweight solution for a lot of lightweight 3D problems, especially if you're looking at Web delivery. (One of the stated goals for this initiative is a next-gen WebGL.)


It could work as a preprocessor on the dev side for quickly outputting code for each native API from a more general form, probably designed as a superset above SPIR-V (bound to be a bit more verbose than pure native code, but even a 100% larger project is better than 2+ discrete ones). Maybe even JIT, if they can essentially extend LLVM (or at least generate direct hooks through the preprocessor).

I don't see much that's 'impossible' on the tech front. Politically, however... it's just my intuition, but somehow I see Microsoft and Apple not cooperating much beyond WebGL efforts, and instead baking progress into their own proprietary APIs/platforms.

The fact of the matter is that, as of today, it's still profitable to develop 3D applications even under the restrictions posed by proprietary APIs, and OpenGL/Vulkan are still optionally usable too (though generally worse performing on Windows or Mac, notably because there's no effort by MS or Apple to help). So there's really not much incentive for open platforms. Essentially, users are absorbing the costs (more money and/or fewer features, e.g. a cross-platform game that sunk costs into porting/licenses rather than content), but so far they don't complain much about it (probably because there's no real-world comparison to be made with open platforms, since none of the commercially successful OSes is truly open at the consumer level).

As so often, it's a nice effort by Khronos; unfortunately they're rather lonely on that road, surrounded by giants with conflicting interests and much wider preoccupations (read: emphasis on building an ecosystem + verticality of control). See the Surface lineup, or even Google, now making Pixels. Actually the latter is a prime candidate to further Khronos' goals; I wonder what the 3D API landscape will look like on their alleged x86 efforts.


Unless I'm misunderstanding you, those aren't the answer. I think the article title is a bit too vague: they're talking about portability of 3D for hardware-accelerated rendering. Unity and Unreal Engine are complete game engines. Hardware-accelerated rendering gets a lot of focus in games, but it is used quite a bit elsewhere: apps for video editing, sound editing, 3D content generation (Maya, Houdini), CAD (the originator of OpenGL), and I think the UIs on all major platforms are hardware accelerated. Also, many game studios write their own engines in house, because those game engines are big and a lot of "baggage" can come with them.


Yes they are.

HN folk don't like to hear it, but that is how the game industry has worked since the early '80s, when each gaming platform was a special snowflake.

There is a thriving industry of game engines and middleware that grew from there, and the majority of game studios don't spend more than the time required to select which middleware they want to use.

Even for those that build their own engines, the graphics abstraction layer is a tiny portion of the engine versus everything else that isn't covered by Khronos standards: GUI support, font/texture/shader loading, mesh rendering, scene descriptions, game editors, ...

Usually adding a new rendering backend to an engine doesn't take more than about one month, with the biggest time slice consumed by the time to first triangle.
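
To give a sense of what such a backend boundary looks like, here is a minimal C sketch of a render-backend function table (the names are made up for illustration, not taken from any particular engine):

    #include <stdio.h>

    /* The graphics abstraction layer of an engine, reduced to a function table.
     * A new backend (Vulkan, D3D12, Metal, ...) fills in one of these structs. */
    typedef struct {
        const char *name;
        int  (*init)(void);
        void (*draw_triangle)(void);  /* "time to first triangle" lives here */
        void (*shutdown)(void);
    } RenderBackend;

    static int  null_init(void)     { return 0; }
    static void null_draw(void)     { puts("[null] triangle drawn"); }
    static void null_shutdown(void) { }

    /* a stub backend; real ones would wrap vkCmdDraw, DrawInstanced, etc. */
    static const RenderBackend null_backend =
        { "null", null_init, null_draw, null_shutdown };

    int main(void) {
        const RenderBackend *rb = &null_backend;  /* selected at startup */
        if (rb->init() == 0) {
            rb->draw_triangle();
            rb->shutdown();
        }
        return 0;
    }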

Given that the Vulkan specification already has 74 extensions and is only at version 1.0.42, already bringing back the fun of "write once, port multiple times" from OpenGL, I am not confident this is going anywhere.

https://www.khronos.org/registry/vulkan/specs/1.0-extensions...


> Given that the Vulkan specification already has 74 extensions and is only at version 1.0.42, already bringing back the fun of "write once, port multiple times" from OpenGL

This is a dishonest and deceptive way of putting things.

The difference between Vulkan and OpenGL here is that most of the functionality extensions are software-only additions and are available everywhere with up-to-date drivers.

Mobile is a bit of a pain point because they don't get driver updates as nicely.

If you look at Vulkan extensions qualitatively (not just the numbers), you'll see that it's nothing at all like the OpenGL extension mess. The vast majority is window system integration (WSI), external memory, and multi-device stuff: platform-specific functionality which would live in WGL/GLX/EGL for OpenGL and can't reasonably be put in "core" (because everyone must support all of core).
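
As a concrete illustration of the WSI point, a small C sketch of how an app might pick those instance extensions at build time (the strings are the registered Khronos extension names; a real application would pass this list to vkCreateInstance):

    #include <stdio.h>

    /* Instance extensions needed for window output: one cross-platform
     * extension plus exactly one platform-specific one. */
    static const char *wsi_extensions[] = {
        "VK_KHR_surface",
    #if defined(_WIN32)
        "VK_KHR_win32_surface",
    #elif defined(__ANDROID__)
        "VK_KHR_android_surface",
    #else
        "VK_KHR_xcb_surface",  /* one of several Linux options */
    #endif
    };

    int main(void) {
        for (unsigned i = 0; i < sizeof wsi_extensions / sizeof *wsi_extensions; i++)
            printf("enable: %s\n", wsi_extensions[i]);
        return 0;
    }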

It was perhaps a mistake to release Vulkan 1.0 as early as they did; it might have been a better idea to put out a 0.9 beta version and accept that there would be API changes.

However, very few changes have been made to the core APIs.


It is not dishonest, as alternative APIs don't suffer from extension explosion.

Also, I don't believe that by following this path Vulkan will be any better than OpenGL, ES or WebGL.

Let's see how many extensions Vulkan 2.0 will have, and of what kind.


Yes, yes it is. If the extensions were anything like the OpenGL extensions, which change core API functionality, you'd be right. But take a look at the extensions and tell me which one(s) would cause a "write once, port multiple times" situation.

I can't name a single extension that causes any friction in Vulkan, unlike OpenGL, where there are lots of code paths that need to be implemented to do even basic stuff (with all the core/ARB/EXT variations).

Yeah, Vulkan has some extra complexity stemming from being cross-vendor and designed by a committee. But it's nowhere near as bad as you imply.

In my experience with Vulkan, it's all the optional features and device capabilities, which need to be queried at runtime with decisions made based on them, that cause complexity for portability (you don't need to care about most of the extensions, and the ones you do have to care about are simpler than their OpenGL counterparts, i.e. WSI vs. GLX/WGL/EGL). And apart from a few pain points (like the number of queues exposed), it's relatively straightforward and comparable to D3D or Metal in complexity.
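
A minimal C sketch of the kind of runtime querying meant here (error handling trimmed; assumes a working Vulkan loader and at least one physical device):

    #include <vulkan/vulkan.h>
    #include <stdio.h>

    int main(void) {
        VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        VkInstance inst;
        if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) return 1;

        uint32_t n = 1;
        VkPhysicalDevice dev;
        vkEnumeratePhysicalDevices(inst, &n, &dev);  /* just grab the first GPU */
        if (n == 0) return 1;

        /* optional features must be queried, never assumed */
        VkPhysicalDeviceFeatures feat;
        vkGetPhysicalDeviceFeatures(dev, &feat);
        printf("geometryShader: %u, multiDrawIndirect: %u\n",
               feat.geometryShader, feat.multiDrawIndirect);

        /* limits and queue family counts vary per device too */
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        printf("%s: maxImageDimension2D = %u\n",
               props.deviceName, props.limits.maxImageDimension2D);

        uint32_t queue_families = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(dev, &queue_families, NULL);
        printf("queue families exposed: %u\n", queue_families);

        vkDestroyInstance(inst, NULL);
        return 0;
    }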


Yes, but Unity, Unreal and the other engines need a 3D API to run on. They can use Vulkan, D3D or Metal on native desktop/mobile platforms, but that's already 3 separate backends, and there's no good API for the browser (WebGL being suboptimal in many ways and about 15 years behind the native platforms).


Isn't the industry getting a little fatigued by now?



