Retina iMac owners can greatly improve graphics performance of many applications (1014.org)
150 points by nom on Feb 3, 2017 | 28 comments



If you're an Apple user and you want Apple to consider improving the performance of 10-bit-per-channel 2D bitmap blitting (which this post seems to demonstrate is poor), file a bug report with Apple at https://bugreport.apple.com. They'll probably mark your bug as a duplicate. They've indicated in the past that duplicates are how they accept votes from the community about bug importance. Every vote counts.


Small correction: duplicates are not really "votes", but some managers at Apple do see them as speaking to the impact of a bug, and almost everyone takes impact into account when scheduling resources for fixing bugs. So if you want your bug fixed:

1. Do everything you can to show your bug clearly and reproducibly. No one is going to spend much time on a bug they can't see.

2. Keep your bug report focused on the problem in front of you; don't stray off on tangents. That gives the best chance that your bug gets fixed rather than dismissed based on the tangents.

3. Include some sort of impact statement for your customers/organization. If this is causing lots of people serious problems, then Apple managers (and the screeners before them) are more likely to care about it than about some other problem that only causes minor irritation for a few people. So don't just aim for a "+1"; tell them how many people you DIRECTLY represent.

But some engineering managers don't care about duplicates; they rely on other things (sometimes only their own judgment) to decide impact and its relation to priority. Very few of us ever know which person is going to make those decisions, or what their priorities are, so it is better to always file your bugs (even if they get duped), and to include impact in your filing.

If you want to make life easier for the screeners (who do most of the duping), then look things up on https://openradar.appspot.com and reference a bug that already does a great job of describing the problem. But then include YOUR impact information.


I'm loyal to Apple to a degree that would get my US visa canceled if it actually were a religion, but even I must say that I don't feel like participating in anything like the process you're describing. I feel developers should actually boycott Radar at this point. It's not just that the process you're describing involves an inordinate amount of my time, for something that is at least as valuable to Apple as it is to me, only because the richest software company in history can't get its bug tracker to parity with GitHub Issues or – may Steve have mercy on my soul – Bugzilla.

It's also that the whole process seems to be engineered to belittle the user, who apparently isn't worthy of being informed of anything. Reported a bug in an API? Well, how about trying it again after every release, or writing a test for it? Because you surely don't expect your tiny mind to warrant a one-liner or a ticket change when we fix it?

JK! We actually have no intention of fixing it. But you do enter the sweepstakes for the "One More Thing That Doesn't Work" award at WWDC for every year that your bug remains open.


*at the expense of disabling 10-bit-per-channel color.

For most apps this should be fine, but as deep color gains more prominence, be aware of the impacts it may have.


Right, most people using audio applications probably don't really care about 10 bit color.


Not just at the expense of disabling 10-bit-per-channel color, but at the expense of breaking color reproduction.


If you cared, you could make an ICC profile representing an accurate characterization of the display, but limited to 8 bits/channel. I doubt the video game players who are focused on blitting the maximum number of textured triangles per second are too worried, though.


This is very handy. As a voice actor I use REAPER exclusively -- it's pretty much the "IDE" for my job, and I use the 5K iMac as my hardware. Getting better graphics performance out of the plugins makes my work faster and more accurate.

Hopefully it doesn't degrade performance on video editing or Pixelmator.


Doesn't really apply to me, but I was excited to see the author of this post. REAPER is simply an amazing program for music creation, and a great price. I've been using it every day for many years now. He coded Winamp back in the day, too!


You could say he really whips the llama's ass


What is REAPER doing that it needs to fling so many bitmaps to the screen at once? Wouldn't that be done more efficiently with GPU texture wrangling?


Probably things like spectrograms, waveforms, etc. For a lot of effects it doesn't really pay to GPU-accelerate the drawing. In this case it looks like REAPER is using custom drawing code to render into an image, but then compositing these image widgets onto the final window plane with native Apple drawing functions. Doing the final compositing step on the GPU with shader color conversion is one option. Two other options I'm surprised weren't mentioned are drawing the image widgets in the final colorspace to avoid the conversion step, and specifically requesting an 8-bit sRGB window so the conversion happens in the system compositor (I'm not sure if OS X supports the latter).
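
As a rough sketch of the "draw in the final colorspace" option (the function name and buffer layout here are mine, not anything from REAPER): tag the CPU-rendered buffer with the target colorspace when wrapping it in a CGImage, so ColorSync has little or no per-pixel conversion left to do at composite time, provided the window's backing store matches:

    #include <CoreGraphics/CoreGraphics.h>

    /* Wrap a CPU-rendered 32bpp BGRX buffer in a CGImage tagged as sRGB. */
    CGImageRef wrap_buffer(void *pixels, size_t w, size_t h) {
        CGColorSpaceRef cs = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
        CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, pixels,
                                                            w * h * 4, NULL);
        CGImageRef img = CGImageCreate(w, h,
                                       8,     /* bits per component */
                                       32,    /* bits per pixel     */
                                       w * 4, /* bytes per row      */
                                       cs,
                                       kCGImageAlphaNoneSkipFirst |
                                           kCGBitmapByteOrder32Little,
                                       dp, NULL, false,
                                       kCGRenderingIntentDefault);
        CGDataProviderRelease(dp);
        CGColorSpaceRelease(cs);
        return img;
    }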


I can't comment on REAPER specifically, but for those who haven't used audio applications: they often have more in common with a game than with a regular desktop application.

The software is often generating live displays in response to interactive or live input (i.e. the audio being synthesised, processed, or played): everything from fancy spectrograms to meters showing levels.

GPU features like texturing work great if you can upload everything you need in advance from the CPU and then re-use it again and again on the GPU side. However, in the case of these sorts of applications, the input is one or many audio streams which are constantly changing.

I'd bet you /could/ implement more of the processing on the GPU itself. But audio apps like this are often multi-platform and based on existing codebases or plugins. Pixel-pushing to the screen seems rather boring, but it typically throws up fewer bugs; I'd say it makes more commercial sense to spend time working on audio features than debugging GPU issues across a wide range of platforms.
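
For what it's worth, here is a minimal sketch of that per-frame streaming pattern in plain OpenGL (the function name and pixel format are my assumptions, not REAPER's code):

    #include <OpenGL/gl.h>

    /* Re-upload a CPU-rendered frame into an existing texture. */
    void upload_frame(GLuint tex, int w, int h, const void *pixels) {
        glBindTexture(GL_TEXTURE_2D, tex);
        /* glTexSubImage2D re-specifies only the contents, not the
           storage, so it's cheaper than glTexImage2D -- but the
           CPU->GPU copy still happens on every single frame. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    }

That copy is exactly the cost that never amortizes when the source data changes every frame.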


I'm not saying it's easy to write shader code for all of these things to offload the bulk of the drawing work to the GPU, but it would provide a huge boost to performance.

OpenGL makes for fairly portable code, especially if you're doing something extremely narrow like rendering on a 2D texture canvas. It's only when you start to use exotic OpenGL features that you quickly get into trouble.


Rendering the arrange view, which can change drastically in response to user edits. We've tested using OpenGL to update, but as the texture needs to get uploaded each frame it's only marginally faster, and uses significantly more resources.


Maybe to display the VSTs. The more recent ones can be quite complex.


VSTs draw their own graphics. They're basically separate applications that have an audio/MIDI pipe to the host.


Sure, but if you're doing intensive graphics of any sort you really need to use OpenGL for compositing. Doing it at the OS layer is really inefficient.


> Luckily, changing the color profile (in system preferences, displays) to "Generic RGB" or similar disables this, and it gets the ~800MPix/sec level of performance similar to the RMBP, which is at least tolerable.

Did the author try sending the image with 64 bits per pixel? It should be possible to send it to the GPU without conversion. But if the full screen is ~15 megapixels, that's ~120MB per frame.


No reply, but I'm still thinking about it. If the iMac has PCIe 3.0 x16, that's 16GB/s of throughput; divided by 120MB per frame, that's ~136 frames per second. Of course these values are theoretical, and the question is what happens on the GPU side.
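
Spelling out the arithmetic (assuming the 5120x2880 panel and decimal GB):

    5120 x 2880          ~= 14.7 Mpix
    14.7 Mpix x 8 bytes  ~= 118 MB per 64bpp frame
    16 GB/s / 118 MB     ~= 136 frames per second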


We render everything (for portability) in 32bpp ARGB, so that'd require conversion anyhow.


I noticed this a few months ago with Perforce's p4merge (Qt-based) application and filed a Radar (28890473).

At least in that instance, the bottleneck was rgba64_image_mark_rgb32(): a perfectly fine little bit of assembly byte shuffling, but not fast enough to gracefully handle the billions of pixels/second being thrown at it.
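
For the curious, here is a scalar sketch of roughly what a 64->32 bpp downconversion like that has to do per pixel (illustrative C, not Qt's actual code):

    #include <stdint.h>
    #include <stddef.h>

    /* RGBA64 (16 bits/channel) -> ARGB32: keep the top 8 bits of
       each channel. */
    void rgba64_to_rgb32(const uint16_t *src, uint32_t *dst, size_t npix) {
        for (size_t i = 0; i < npix; i++) {
            uint32_t r = src[i * 4 + 0] >> 8;
            uint32_t g = src[i * 4 + 1] >> 8;
            uint32_t b = src[i * 4 + 2] >> 8;
            dst[i] = 0xFF000000u | (r << 16) | (g << 8) | b;
        }
        /* 8 bytes read and 4 written per pixel, so at billions of
           pixels per second this is memory-bandwidth-bound no matter
           how tight the SIMD is. */
    }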


As someone who has zero experience with this kind of thing, if someone complains "Hey, why is it taking forever to release/update [INSERT GPU-HEAVY APP/GAME HERE]", I'll just point them here.

So many data types! With such long-but-similar-yet-I'm-sure-very-differently-performing names!


I'm not sure the author is even using the GPU. As I understand it, they are using their own cross-platform code to draw into a bitmap, and displaying that bitmap in a view is slow because of all the compositing and color management macOS is doing.

It sounds like they are doing all their drawing on the CPU using APIs optimized for accuracy, instead of drawing on the GPU using APIs optimized for speed.


How does this apply if I'm using f.lux?


As of 10.12.4 you won't need f.lux, as Night Shift functionality is built into macOS.


You should do it, then. f.lux is already ratcheting down the gamut, so you're wasting resources running 10-bit color on top of it.

It's like using a color printer to print only black text documents; it doesn't make a whole lot of sense in the efficiency department.


But if I'm using the f.lux color space, I presume I can't change it to a different one without breaking it. I haven't tried it, though, and apparently it's moot with the new OS.



