
I don't think it's an advantage to sabotage the adoption of a common API like Vulkan. It holds progress back, and I blame Apple entirely for the pointless NIH and lack of collaboration here.

Apple are doing it out of a rather sickening lock-in culture in the company, and Metal is far from the only example of that.




They aren't "sabotaging" anything, they are making a perfectly normal design trade-off to implement some features in shaders instead of as fixed-function hardware. By your definition, every modern GPU built in the last decade is "sabotaging" OpenGL 1.x support, because fixed-function vertex and pixel processing hasn't been a thing in that long and it's all done in shaders now, even if you use the legacy APIs.

Alyssa clearly explained how avoiding fixed-function hardware means they can cram more shaders in, which means they can increase performance; we have no idea, at this stage, whether this ends up being a net gain or a net loss for, say, a Vulkan app. And we probably never will, because we don't have an "AGX-but-it-has-this-stuff-and-fewer-shader-cores-in-the-same-silicon-area" to compare with. And it doesn't matter. In the end OpenGL and Vulkan apps will run fine.

If we ever end up with empirical evidence that these design choices significantly hurt real-world OpenGL and Vulkan workloads in ways which cannot be worked around, you can start complaining about Apple. Until then, there is absolutely no indication that this will be a problem, never mind zero evidence for your conspiracy theory that it was a deliberate attempt by Apple to sabotage other APIs.


I'll agree with you when they support Vulkan properly. Until then I see them as hostile to the adoption of common GPU APIs.


You're talking about macOS. We're talking about AGX2. If you want to complain about Apple's API support in macOS, a discussion about AGX2 support for Linux is not the right venue.

I am, quite honestly, getting very tired of all the off-topic gratuitous Apple bashing in articles about our Linux porting project.


Shmerl pops up on every thread mentioning Vulkan/Apple spouting conspiracy theory nonsense that every design decision is some kind of evil plan to screw over open standards. Ignore him.

Keep up the great work, plenty of people really appreciate it.


And he keeps forgetting that no one in the games industry, or among console vendors for that matter, cares about his conspiracy theories.


Yup, I doubt he has experience working in the games industry. Many engines support multiple graphics APIs, and there are often only 1-2 employees implementing/maintaining them, so vendor lock-in is not a strong argument here.


How is that a counter argument to anything? The need to support multiple APIs is not free. It's a tax on everything else.


You're right that it's not free. But compared to the size of the whole game engine codebase, the renderer backend is usually not big.


It is a waste of time that could be avoided, and it exists only because of the insistence on lock-in by the likes of Apple.


Lock-in proponents bring their Kool-Aid; not impressed. The gaming industry is pretty messed up when it comes to lock-in, and everyone is paying this tax.


I am aware of my limitations as a human being in this society, I speak from actual work experience, and I will use any tooling that I rant about when it is in the best interests of the customers, regardless of my personal agenda.

Something to think about, or maybe not.


Yes, I agree it's tiresome. I wish Hacker News had stronger rules against people making bad-faith arguments than it does now.


[flagged]


Your argument is equivalent to criticizing ARM for adding instructions to their architecture that optimize JavaScript, as if that "sabotages" every other programming language. Or Intel for adding instructions that optimize AES, as if that "sabotages" Salsa20.

It doesn't make any sense. Of course Apple optimizes Metal for their GPUs and their GPUs for Metal. None of that is hostile towards other APIs. All of this hardware is Turing-complete and by definition can implement any conceivable graphics API. The only question is how well it performs with those APIs, and until we have benchmark numbers, your argument is based on assumptions lacking any evidence.


I'm not sure what the Turing completeness argument has to do with anything. A Turing machine is also Turing complete; are you going to make GPUs like that?

We are talking about a simple fact: Apple don't care to collaborate on Vulkan, neither when designing their GPUs nor for their OS. I see no point in arguing further about facts, and I see criticism of that as completely valid.


Supporting both Vulkan and Metal in a game engine is not a huge task. I work in the games industry, my job is to implement and maintain graphics backends for a rendering engine, so I can speak from experience.
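To give a rough idea of what that means in practice: the engine-facing side is usually a small abstract interface with one implementation per API. A simplified sketch (the names here are made up for illustration, not from any particular engine):

    #include <memory>
    #include <string>

    struct Mesh;  // placeholder for engine-side mesh data

    // Hypothetical backend interface; only these calls need a Vulkan
    // and a Metal implementation, the rest of the engine stays API-agnostic.
    class IRenderBackend {
    public:
        virtual ~IRenderBackend() = default;
        virtual void BeginFrame() = 0;
        virtual void DrawMesh(const Mesh& mesh) = 0;
        virtual void EndFrame() = 0;
    };

    // Chosen once at startup, e.g. CreateBackend("vulkan") or CreateBackend("metal").
    std::unique_ptr<IRenderBackend> CreateBackend(const std::string& api);

The per-API work lives behind that interface, which is real effort but bounded relative to the rest of the engine.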


Huge or not, duplication of effort is a tax. And no, it's not as trivial as you claim, especially when an engine wasn't designed from the ground up to accommodate these differences.


I think you haven't effectively countered their point.


Why should they support Vulkan? What does Apple get out of that apart from less well-optimised compute and shader code, using more battery and producing more heat for the same output? (The reason it would be less well optimised is that Vulkan is an API designed by a group to be the best compromise across many GPU vendors.)

If Apple wanted to support Vulkan without it being worse than Metal, they would either need to add so many Apple-only extensions that it would be Vulkan in name only, or make their GPUs identical to AMD's or Nvidia's (unfortunately, due to IP patents, Apple can't just make a copy of AMD's GPUs; they need another IP partner, and that is PowerVR).

If PowerVR had 80% of the GPU market (like Nvidia), they would have pushed Vulkan to line up with a TBDR pipeline, but they do not, so while you can run Vulkan on a TBDR pipeline, you end up throwing away lots and lots of optimisations.


AGX (and SGX) aren't the only TBDR architectures. ARM Mali GPUs are also TBDR, are in plenty of phones, and run Vulkan just fine.


Adding to this, the whole render pass concept in the Vulkan API came from the TBDR vendors being very active contributors to the API. Describing the data flow more explicitly there allows TBDR architectures to work on multiple parts of modern render graphs simultaneously and keep their tiles filled with work in places where other synchronization methods wouldn't (or would require the kind of divination on the part of the driver that Vulkan is trying to avoid).
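To make that concrete, the per-attachment load/store ops and the subpass dependencies on a render pass are the main places where that data flow is spelled out. A minimal sketch using the Vulkan C API (values chosen only for illustration, not a complete render pass setup):

    #include <vulkan/vulkan.h>

    // The explicit per-attachment ops tell a tile-based driver whether tile
    // memory has to be loaded from / flushed to DRAM around the pass.
    VkAttachmentDescription DescribeColorAttachment() {
        VkAttachmentDescription color = {};
        color.format        = VK_FORMAT_B8G8R8A8_UNORM;
        color.samples       = VK_SAMPLE_COUNT_1_BIT;
        color.loadOp        = VK_ATTACHMENT_LOAD_OP_CLEAR;   // no read-back of old contents into the tile
        color.storeOp       = VK_ATTACHMENT_STORE_OP_STORE;  // results are consumed later, so write the tile out
        color.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
        color.finalLayout   = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
        // VkSubpassDependency entries (not shown) spell out which passes feed
        // which: that is the data-flow information a TBDR driver can use to
        // overlap tile work across the render graph.
        return color;
    }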


And Larrabee!!! It was the tilyest of them all!


Android developers wouldn't say phones run Vulkan just fine, unless they're talking about Samsung and Google devices.

Hence Google made it a compulsory API in Android 10, to try to push OEMs into improving their Vulkan story, and even then it is only plain Vulkan 1.1.


The fixed-function hardware that Alyssa assumed existed (vertex attribute fetch, special uniform buffer hardware) doesn't exist on many GPUs outside of mobile. In fact, Vulkan was designed to be a closer match for these GPUs, by making many of the same tradeoffs that Metal did. If they wanted to sabotage Vulkan, choosing the same tradeoffs as it and following the same path as most other GPU vendors doesn't seem like a very effective way to do it.


Apple are doing it because there’s no benefit to them in doing it the way that you want them to.

Apple pay the piper, and Apple call the tune. Whether you like it or not is immaterial.


Apple has every right to do that as they control their stack. And people who favor FOSS projects have every right to criticize Apple for exercising their control in a way that adversely impacts FOSS.


Sure, but comments like this attribute hostility instead of simple practicality.

> ...pointless NIH and lack of collaboration here.

> Apple are doing it out of a rather sickening lock-in culture in the company, and Metal is far from the only example of that.


On the flip side, comments like

> no benefit to them in doing it the way that you want them to.

portray Apple almost as a helpless besieged small business that should be shielded from critique of its decisions. Whereas they are an industry titan, and people should criticize them as they see fit, even if others don't find merit in the criticisms.

> Whether you like it or not is immaterial

is a completely true and utterly banal statement, as it can be applied to any opinion voiced in conversation. No one here has any power over Apple, but we do have the power of free discussion, do we not?


I find such lock-in to be hostile and not something to be excused with practicality. It's like saying ActiveX is practical, so don't blame you-know-who for not supporting HTML, or something like that.


There is a benefit to them, but they're myopic about it. The fact that we had OpenGL and DirectX as standards meant that a vibrant 3D accelerator market opened up, which made immense advancements since the late 90s. Apple benefitted tremendously from being able to just pick up two decades of R&D into GPUs that existed because consumers had a competitive choice. If software had been locked into a single GPU, say a 3dfx Voodoo1, and all software had targeted a proprietary API and design, how much advancement would have been lost?

Apple didn't even design their own GPU from scratch; the IP behind it is largely PowerVR's, which again arose from a company trying to compete against ATI, NVidia, 3dfx, Matrox, etc., who were running into a bandwidth wall, by taking a big risk on a tile-based deferred renderer.

Now look at what is being competed on: ray intersection hardware. This is happening because of raytracing extensions to DirectX and OpenGL. Otherwise you end up with a game console, and while game consoles can leverage their HW maximally, they don't necessarily produce top-end HW innovation and performance.


They sure think lock-in is a big benefit for them; that's part of the corporate culture I was talking about. I'm just saying that it's nasty, bad for progress, and the wrong way to do things.


    // Hypothetical engine-side API: grab the active backend and draw through it.
    I3DRender* render = Engine::GetRender("render-name");
    render->DrawMesh(scene);
So hard, I can't believe I manage.


That's not really a fair argument, though, because you know what you're talking about.


Like anyone that learns 3D programming.

In any case, here is an example of such an approach: https://www.ogre3d.org/


> Apple are doing it out of a rather sickening lock-in culture in the company, and Metal is far from the only example of that.

I don’t disagree, but what else could they possibly do?

Metal shipped in 2014 for iOS, in 2015 for OSX. Vulkan 1.0 was released in 2016.

I don’t think it was reasonable to postpone a long-overdue next-gen GPU API for a few years, waiting for some consortium (outside of their control) to come up with API specs. By the time Vulkan 1.0 was released, people had been using Metal for a couple of years already.



