CUDA Performance: Maximizing Instruction-Level Parallelism (continuum.io)
53 points by hhuuggoo on Sept 6, 2013 | 11 comments



Vasily's approach to CUDA really revolutionized how I think about GPU programming and I'm glad the continuum folks are giving ILP on the GPU a broader audience. Can anyone testify to the quality of continuum's CUDA wrapper? Is it nicer to work with than PyCUDA?


I haven't dealt much with PyCUDA recently, but Continuum's wrapper is interesting in that it compiles python code (or at least a subset thereof) to run natively on the GPU, via LLVM if I'm not mistaken. As far as I'm aware, PyCUDA only allows Python code to call pre-compiled CUDA kernels.
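To make the distinction concrete, here is a minimal sketch (no GPU or Continuum libraries required) of the kernel-launch programming model that a Python-to-GPU JIT exposes: you write the body in a per-thread style, and the runtime maps it over a grid of threads. All names here are hypothetical, and the `launch` loop stands in for what the GPU does in parallel.

```python
# Illustrative sketch of the per-thread kernel style a Python GPU JIT
# compiles. Names are hypothetical; a plain loop emulates the thread grid.

def saxpy_kernel(tid, a, x, y, out):
    # Written as if `tid` were this thread's index on the GPU.
    if tid < len(out):
        out[tid] = a * x[tid] + y[tid]

def launch(kernel, nthreads, *args):
    # On a real GPU these iterations run concurrently; here we serialize.
    for tid in range(nthreads):
        kernel(tid, *args)

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
# out == [6.0, 9.0, 12.0]
```

The point of compiling via LLVM is that a function like `saxpy_kernel` can become native GPU code rather than interpreted Python, without the user ever writing CUDA C.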


A labmate did some great work similar to Continuum's wrapper and has been continuing on now at NVIDIA: http://copperhead.github.io/ . He basically identified an ML-like subset of Python (sort of like asm.js vs js) and specializes it.

For me, the big surprise is that Copperhead departs from NESL-like flattening transformations (e.g., those used by Data Parallel Haskell.) It's a bit less surprising when you realize the creator is a GPU expert :)

Edit: Vasily, the guy behind the paper advertised in Continuum's blog post, is also from our lab ;-)


Is Bryan still working on Copperhead?

Also, do you know if the DPH folks ever managed to iron out a version of higher order flattening which gives a predictable performance gain?


I think Bryan has been doing a followup to Copperhead, probably easy to just ask him :)

I don't know what you mean by predictable performance. Flattening is a direct transformation and seems simple to reason about on SIMD architectures, though the recent dynamic schedule (work stealing) approach for multicore/distributed has the usual caveats. (I tend to avoid it for HPC.) Given the 10+ year history of the researchers involved, it seems like a slow-but-steady project..
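For readers unfamiliar with the flattening transformation being discussed: the idea (from NESL, and used by Data Parallel Haskell) is to turn a nested data-parallel operation over ragged data into a single flat pass plus segment bookkeeping, which maps well onto SIMD hardware. A toy sketch with hypothetical helper names:

```python
# Sketch of NESL-style flattening: a nested map over a ragged list becomes
# one flat data-parallel pass plus segment-length bookkeeping.

def flatten(nested):
    segs = [len(row) for row in nested]          # remember each row's length
    flat = [x for row in nested for x in row]    # one flat array
    return flat, segs

def unflatten(flat, segs):
    out, i = [], 0
    for n in segs:
        out.append(flat[i:i + n])
        i += n
    return out

nested = [[1, 2], [3], [4, 5, 6]]
flat, segs = flatten(nested)
# The nested map becomes a single flat pass -- SIMD-friendly on a GPU.
doubled = [x * 2 for x in flat]
result = unflatten(doubled, segs)
# result == [[2, 4], [6], [8, 10, 12]]
```

Copperhead's departure from this scheme, as mentioned above, is what makes it notable: it specializes the nested code directly instead of flattening it first.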


PyCUDA lets you define new kernels by plopping CUDA source as a triple-quoted string in the middle of your Python code. It's not the most elegant thing in the world but you quickly get used to it. It seems on the surface that Continuum's CUDA is essentially the same thing sans-quotations. The semantic level of the "Python" code you're compiling doesn't look significantly more abstract than what you would find in a .cu file. But, it's not fair to judge from a few blog posts, which is why I'm wondering what peoples' actual experiences have been.
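For anyone who hasn't seen the "triple-quoted string" style: with PyCUDA the kernel stays CUDA C source embedded in the Python file and is compiled at runtime with `SourceModule`. The pycuda calls are commented out below so the sketch runs without a GPU; the string itself is ordinary CUDA C.

```python
# PyCUDA-style kernel embedding: CUDA C source as a triple-quoted string,
# compiled at runtime. The pycuda calls are commented out so this sketch
# runs anywhere.

kernel_source = """
__global__ void saxpy(float a, float *x, float *y, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a * x[i] + y[i];
}
"""

# With a GPU and PyCUDA installed, you would compile and fetch it like so:
# import pycuda.autoinit
# from pycuda.compiler import SourceModule
# mod = SourceModule(kernel_source)
# saxpy = mod.get_function("saxpy")
```

Which makes the grandparent's point: Continuum's approach removes the quotation marks, but the semantic level of the kernel body is much the same.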


Why is this on the frontpage? This is just a copy/dumbed-down version of the original presentation. Also, the practical use of this idea is extremely limited.

Also the poster seems to have an agenda; this is just marketing.


It's hard to see how you could have read the article given your comment. This is certainly not a copy of some original presentation. The article may not be motivated well and certainly can be critiqued on many levels.

But it describes a specific reason why using a high-level language to directly program the GPU can be extremely useful --- you can easily build, test, and iterate on execution order to improve performance. Hardware is changing, and we need better tools to write code for it.

The author uses CUDA Python, but you could do similar things with PyCUDA --- it's the emphasis on the scheduling that is the relevant point.
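The scheduling point can be shown without any GPU at all. The core idea from Volkov's talk is that restructuring a loop into independent operations breaks the serial dependency chain, so the hardware can overlap instructions. This plain-Python sketch only demonstrates that the restructured loop computes the same result; the performance win appears on hardware that exploits the independence.

```python
# Sketch of the ILP restructuring idea: splitting a reduction into
# independent accumulators breaks the serial dependency chain, letting
# GPU (or superscalar CPU) hardware overlap the adds.

def reduce_serial(xs):
    acc = 0
    for x in xs:
        acc += x              # every add depends on the previous one
    return acc

def reduce_ilp(xs, lanes=4):
    accs = [0] * lanes        # independent partial sums -> 4-way ILP
    for i, x in enumerate(xs):
        accs[i % lanes] += x  # adds into different accumulators don't wait
    return sum(accs)

data = list(range(100))
# Same answer either way; only the dependency structure differs.
assert reduce_serial(data) == reduce_ilp(data) == 4950
```

Being able to iterate on this kind of restructuring from a high-level language, and benchmark each variant quickly, is exactly the workflow the article is advocating.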


> Also the poster seems to have an agenda;

True. Most people who take the time to write content that they publish on the internet have an agenda.

> this is just marketing.

False. This is an informative and useful summary that condenses a 75-page, deeply technical presentation into a few screens of text, and it shows actual Python code (and benchmarks) to demonstrate the principles.

> Also, the practical use of this idea is extremely limited.

Care to elaborate? A substantive discussion about the subject of the original post would actually be constructive and add value for the HN community.


I'm sorry, and you are right. What I really meant was that I dislike that they don't give enough credit, making themselves look smart with Vasily's knowledge. This kind of marketing feels immoral to me.

> Care to elaborate?

This really isn't the right place for that, so I didn't bother. The right place would be a thread/forum about the original presentation. Also, there wouldn't be much to elaborate on, since my opinion was based on general insight; it is not a provable fact.


Is this a novelty account? How long has news.yc had these? Is there a policy of quickly banning these things? It certainly doesn't help the quality of discourse here.



