ComputeSharp, Run C# on the GPU (github.com/sergio0694)
173 points by pjmlp on Feb 23, 2021 | 38 comments



If you want less overhead, pre-compiled GPU kernel code, and a simple input/output interface for GPU computation, I wrote a similar but arguably lighter-weight solution here:

https://github.com/nick-funk/radium

It uses Cloo, so it's OpenCL if that's what you like. It includes examples for basic array operations as well as a full-blown ray tracer, so you can see how complex objects are passed to and from the GPU.

Caveat: you're writing C-esque compute kernels, not C#, though.


Is it possible to control when and which variables are copied to the GPU? With solutions like this, automatically copying big arrays to the GPU can take more time than the kernel execution itself. I'd like to use GPU programs this way, but I also need low latency.


It is! ComputeSharp doesn't copy buffers automatically; this is deliberate, to give you more control over exactly when your data is copied back and forth. You can either use normal resource types (eg. ReadWriteBuffer<T>) and manually copy data to/from them before and after running a kernel on the GPU, or you can create transfer buffers (eg. UploadBuffer<T> and ReadBackBuffer<T>) for even more fine-grained control over all copy operations and over the allocation of the temporary GPU buffers used for copies.
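For instance, the explicit-copy flow might look like the sketch below. The names here (Gpu.Default, AllocateReadWriteBuffer, For, CopyTo, the [AutoConstructor] attribute, and the MultiplyByTwo shader itself) follow the README-era 2.x API but are best treated as assumptions; check the updated docs for the exact signatures:

```csharp
using ComputeSharp;

// Hypothetical shader: doubles every element of a buffer.
[AutoConstructor]
public readonly partial struct MultiplyByTwo : IComputeShader
{
    public readonly ReadWriteBuffer<float> buffer;

    public void Execute() => buffer[ThreadIds.X] *= 2;
}

class Program
{
    static void Main()
    {
        float[] data = new float[1024]; // input prepared on the CPU

        // Nothing is copied implicitly: the upload to the GPU happens here, when you choose.
        using ReadWriteBuffer<float> buffer = Gpu.Default.AllocateReadWriteBuffer(data);

        // Dispatch the kernel over the buffer.
        Gpu.Default.For(buffer.Length, new MultiplyByTwo(buffer));

        // The copy back to the CPU also happens only here, explicitly.
        buffer.CopyTo(data);
    }
}
```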

I'm currently writing updated docs to cover all these details, since the ones currently on GitHub only refer to the previous v1 release of the library.


One solution would be to create persistent buffers for the data on the GPU, then map them and write directly to those buffers from the CPU. There are big possible downsides here though, since mapped memory doesn't behave like CPU memory.

Your variables would have to be accessed via pointers, but C# has robust pointer support, and the compiler could probably rewrite them (JSIL did this).


This works on DX12. There's another, more mature project that uses CUDA/OpenCL and has a CPU-only mode too: http://www.ilgpu.net/

Frankly, unless licensing is a problem, I think the ComputeSharp author should just implement a DX12-based accelerator for ILGPU.


To me, ComputeSharp looks a lot more accessible: I could understand how to use it instantly from the readme. I've gone over the ILGPU documentation now, and it mostly raises more questions.

Looks like ComputeSharp sits at a higher level of abstraction, or at least closer to what I expect given my background.

Anyway, it's definitely a valuable project and I'd love to give it a run.


Ouch... that's very sad to hear :( ILGPU was developed with CUDA/OpenCL/C++ AMP (designed for GPGPU computing) in mind. Do you have any suggestions for how we could improve the documentation?




You have a very short-sighted view of .NET development. I'd take whatever .NET tech over anything Java. With Mono, .NET, .NET Core, and Unity's IL2CPP, .NET and C# appeal to a broad audience.


I think you have a very short-sighted view of Java. Java has GraalVM; there's simply nothing even close in the .NET world. And this is just the core of the runtime.

Java is catching up quickly. I say that as a .NET-focused dev who touches Java once a year.


I primarily develop Unity applications, and there is IL2CPP, which compiles IL code to C++ and runs it. Also, I've developed in Java regularly over the years and it's never a pleasure. C# is a better language. Java 8 introduced the Stream API and it sucks. I hate how bloated every new Java library is in size and API smell. Everything in Java land is over-engineered, and Tomcat can die in a fire.


I've worked with both regularly since 2002, alongside occasional C++ development, and Microsoft's UI division really needs to clarify what it wants.

The GUI roadmap is becoming a joke: now they are pushing Blazor on web widgets as an Electron alternative, alongside MAUI, with a "pick whatever you like best".

That doesn't work when selling long-term solutions to customers, especially when we still have UWP scars to take care of.

It feels that after WinRT's failure to take over the world, they are unsure where to go next and are throwing things in all directions to see what sticks.


Keep in mind that you're on Hacker News: most commenters will be referring not to desktop software but to the web, which is where all the focus and drive behind .NET Core feature development has been coming from.

I think it's fair to say that the .NET team absolutely has a vision and has made significant progress towards it as far as web (frontend/backend) development is concerned, despite how ridiculously lacking the desktop UI story is and has been. (In fact, it's even worse than that, since ten years ago .NET was the only ecosystem with an excellent, comprehensive UI development story supported from start to finish.)


Check the deep chain of comments discussing the postponement of AOT and Java interoperability for cloud deployments from .NET 6 to .NET 7, as resources were focused on making MAUI work on macOS.


Where was this?



You kind of proved my point.


Ever been to a JS shop? At least when someone hacks C# together, the math works, libraries don't disappear from under you, and every build message doesn't contain "such and such developer is looking for a good job".

> "They restarted IIS instances and Windows servers regularly. Now they restart their Docker container instances."

Sounds like an improvement to me :)

And okay, suppose we accept that they are clueless; what's with the accusations of arrogance?


This is me, except I haven't looked at Docker yet. I'm just using self-hosted Kestrel instead of IIS now.


I think the fact that it runs on DX12 is very cool, because it could potentially let me run GPGPU stuff on a laptop with only integrated graphics. Does anyone have a sense of whether that would be worthwhile? Is the parallel efficiency of an integrated GPU worth developing for?


If your algorithm parallelizes well, sure! Even a mobile GPU would likely be much faster at things such as image processing or other general-purpose shaders than running those on the CPU (which, especially on mobile, is likely not very performant).

If you want to try, you can clone the repository (make sure to be on the dev branch!) and run the Benchmark project, which will show you a baseline difference in speed between the CPU and the GPU for some example algorithms. Let me know how it goes!
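For a rough sense of what the CPU side of such a comparison looks like, here is a trivial image-style workload written against plain .NET (standard library only; nothing ComputeSharp-specific). This is exactly the kind of embarrassingly parallel, per-element loop that also maps well onto a GPU shader:

```csharp
using System;
using System.Threading.Tasks;

public static class CpuBaseline
{
    // Convert interleaved RGB bytes to grayscale using all CPU cores.
    // Each output pixel depends only on its own inputs, so iterations are independent.
    public static byte[] ToGrayscale(byte[] rgb)
    {
        byte[] gray = new byte[rgb.Length / 3];
        Parallel.For(0, gray.Length, i =>
        {
            int r = rgb[i * 3], g = rgb[i * 3 + 1], b = rgb[i * 3 + 2];
            gray[i] = (byte)((r * 299 + g * 587 + b * 114) / 1000); // integer luma approximation
        });
        return gray;
    }

    public static void Main()
    {
        byte[] rgb = { 255, 255, 255, 0, 0, 0 }; // one white pixel, one black pixel
        byte[] gray = ToGrayscale(rgb);
        Console.WriteLine(string.Join(",", gray)); // prints "255,0"
    }
}
```

The GPU version of this would run one shader invocation per pixel instead of one loop iteration per pixel, which is where an integrated GPU can still win despite its modest specs.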


Thanks for the reply! I will check this out. My potential use case is actually audio processing, I’ll let you know how it goes!


Awesome! If you run into any problems, feel free to open an issue on the repo (or ping me in the C# Discord server). Do make sure to grab one of the latest ComputeSharp 2.0 packages from the CI, though, and not the ones on NuGet: those are still the old 1.x version, and I plan to update them in the coming weeks. Also have a look at the various samples in the repo if you want a general reference for what kinds of operations can be done with the library. Cheers!


I thought most audio DSP/VST code doesn't use GPGPU because of the latency. It's rare that CUDA or similar is required, if ever.


Intel and AMD integrated graphics support OpenCL, meaning ILGPU will work on your laptop.
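As a sketch of what that looks like in code, a minimal ILGPU kernel is roughly the following. The API names used here (Context.CreateDefault, GetPreferredDevice, LoadAutoGroupedStreamKernel, Allocate1D) come from a recent ILGPU release and may differ from the version being discussed, so treat this as an approximation rather than a definitive example:

```csharp
using ILGPU;
using ILGPU.Runtime;

class IlgpuSketch
{
    // Kernel: runs once per element, on CUDA, OpenCL, or the CPU accelerator.
    static void Scale(Index1D i, ArrayView<float> data) => data[i] *= 2f;

    static void Main()
    {
        using var context = Context.CreateDefault();

        // Picks CUDA/OpenCL if available, otherwise falls back to the CPU accelerator.
        using var accelerator = context.GetPreferredDevice(preferCPU: false)
                                       .CreateAccelerator(context);

        var kernel = accelerator.LoadAutoGroupedStreamKernel<Index1D, ArrayView<float>>(Scale);

        using var buffer = accelerator.Allocate1D(new float[] { 1f, 2f, 3f });
        kernel((Index1D)buffer.Length, buffer.View);
        accelerator.Synchronize();

        float[] result = buffer.GetAsArray1D(); // expected: { 2, 4, 6 }
    }
}
```

The same kernel runs unchanged whether the accelerator ends up being CUDA, OpenCL on integrated graphics, or the CPU.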


CUDA is NVIDIA vendor lock-in. While not intrinsically a bad thing, it does limit the possibilities for general-purpose use. With this one, you can just plug any GPU that supports DX12 into your hardware and start creating "incoming blockchain miners written in C# in 3..2..1".


But why is ILGPU a problem then? ILGPU is CUDA + OpenCL. OpenCL should be fine, right?


And DX12 is Microsoft vendor lock-in. To make a portable GPGPU app I'd have to use ILGPU. The only platform DX12 covers that ILGPU does not is Xbox.

ILGPU supports both Linux and macOS.


ComputeSharp can run code on the CPU as well, through the DX12 WARP device (https://docs.microsoft.com/windows/win32/direct3darticles/di...). It's obviously not very performant compared to properly optimized GPU code, but it offers an easy-to-use fallback for devices that don't have a compatible GPU. ComputeSharp will automatically pick that device if it can't find a suitable GPU, and it's also possible to select it explicitly, eg. if you wanted to run some tests.


The CPU part I mentioned is mostly a side thought. Risking being incorrect here, but I thought that with ILGPU, using the CPU device means you can use the .NET debugger to debug your code, which is a huge advantage.

ILGPU's main brilliance, though, is its support for the most interesting platforms: Windows, Linux, macOS, and maybe even Android (via OpenCL). Also, hopefully, NVIDIA-based IoT boards (via CUDA and OpenCL).

DX12's coverage is far smaller, and only really brings Xbox to the table.

I am not saying your project is bad; it is also brilliant! I would love to work on it too, if days had 32-36 hours instead of 24 :-) But as a consumer of the .NET open source package ecosystem, it would be MUCH better not to have to write GPGPU code twice: once with one library for Mac, Linux, and Android, and again with another library for Xbox (either works on Windows).

So I am asking you to be open-minded here and seriously consider joining the ILGPU project.

P.S. Personally, I found ILGPU a few months ago, and I am in no way part of the project (yet).


Another project is ShaderGen [0], which enables writing C# that can run on the GPU as well as on the CPU.

And there is a practical example implementing raytracing [1].

[0] https://github.com/mellinoe/shadergen

[1] https://github.com/mellinoe/veldrid-raytracer


Are there any easy-to-understand samples? I looked in the samples directory and only saw how to do a bokeh blur.

I was looking for something a little simpler... I have two large arrays and I want to do simple mathematical operations on them.


I have some more samples in the other folders, and I will also include some snippets in the updated docs (I need to finish writing them and include them in the dev branch). Here are a couple of examples:

A super simple, complete example of a shader that just multiplies all items in a buffer by 2: https://github.com/Sergio0694/ComputeSharp/blob/dev/samples/...

A simple, naive matrix-multiply-add shader: https://github.com/Sergio0694/ComputeSharp/blob/ada133aacd29...

Let me know if those help, and feel free to ping me on GitHub or in the C# Discord server if you have any issues!



What are the advantages compared to ILGPU? Any comparative performance benchmarks?


There are many key differences between the two, which can be useful in different situations, so the advantages and disadvantages of each depend on what you're trying to do. Here are some general differences, if it helps:

- ComputeSharp uses exclusively DirectX as the backend; ILGPU has backends for CUDA/OpenCL/CPU. This means ComputeSharp only works on Windows 10, but it has the benefit of not needing any other dependencies (no need to install CUDA or OpenCL to get GPU support). ILGPU should work on Linux/Mac too, but requires you to install those frameworks if you want your code to run on the GPU.

- ComputeSharp is tied to DX12 compute shaders and the HLSL language. This means it exposes many HLSL intrinsics and features, which makes it particularly easy to write or port existing shaders (from either HLSL or GLSL). I have some samples in the repo with shaders ported from https://www.shadertoy.com/ to showcase this. ILGPU instead lets you write more "abstract" code, in a way.

- Since ComputeSharp is heavily tied to DirectX 12 APIs, it makes it very easy to interop with DirectX and Win32 APIs if you want to. For instance, you can run a shader with ComputeSharp and then manually copy the results to some other DX resource, eg. to render it in a window.

- ComputeSharp also exposes a number of additional DirectX types and features, such as 2D and 3D textures, and allows custom pixel formats for your textures, which makes it particularly easy to write shaders that process images, as the GPU can take care of all the pixel format conversion for you automatically.

- ComputeSharp rewrites your C# code to HLSL at build time, whereas ILGPU has a runtime JIT to process the kernels. This likely makes ComputeSharp faster on the first run (as most of the work is done at build time), with no need for dynamic code generation, but the C# code needs to follow some rules to be supported. ILGPU can be a bit more forgiving in how it lets you write code, and it might be able to apply some extra optimizations for you, like using CUDA streams.

Overall, the two libraries are very different, and I believe there's space for both of them in the ecosystem. The more open source projects there are for people to choose from, the better! Plus, it's also a way for developers to learn by looking at what others are doing in a given area and what approaches they're using to solve similar problems.

If you try ComputeSharp 2.0 out (eg. by running the samples), let me know how it goes!


@Sergio0694 thanks for pointing out the differences!

I fully agree that the two libraries use different approaches to realize .NET code execution on the GPU. Correct me if I'm wrong: ComputeSharp is a source-to-source translator from C# to HLSL (as you said), and ILGPU is an IL-to-assembly just-in-time compiler for .NET programs (similar to the .NET runtime itself, except for the current implementation of the OpenCL backend...). ILGPU is designed to implement GPGPU programs highly efficiently on GPUs while retaining the ability to emulate all functions on the CPU for debugging (use case: HPC-inspired GPGPU computing workloads). It also includes a variety of compiler optimizations specific to different GPU architectures. The primary focus has been to achieve performance on NVIDIA GPUs comparable to "native" CUDA programs.

However, I also think there is definitely enough room for both libraries in the ecosystem (as they also target different groups of developers/users... :) ).


You are correct: the way ComputeSharp works is that a Roslyn source generator rewrites the shaders at build time from C# to HLSL (mapping types, methods, intrinsics, etc.); the shaders are then compiled and cached at runtime (which gives the library some additional flexibility, like shader metaprogramming, ie. capturing delegates in a shader to compose them dynamically) and finally dispatched on the GPU.

From what I can see, ILGPU basically does more work to make your code well optimized (with its full JIT compiler, like you mentioned), whereas ComputeSharp is more about letting you write a "C#-ified HLSL shader" that is then run as is. So you could say it's less forgiving but gives you more control: you can basically port GLSL/HLSL shaders directly to C# and run them in ComputeSharp with almost no code changes (see https://twitter.com/SergioPedri/status/1363869460793335811 or https://twitter.com/SergioPedri/status/1364210592760872960).
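To illustrate the "C#-ified HLSL" idea, here is a hypothetical shader together with the kind of HLSL it might translate to. Both halves are sketches: the C# names follow the README-era API, and the HLSL is an approximation, not the generator's actual output:

```csharp
using ComputeSharp;

// Hypothetical shader: clamps every value in a buffer to [0, 1].
// Hlsl.Saturate is a thin wrapper mapping to the HLSL saturate() intrinsic.
[AutoConstructor]
public readonly partial struct Clamp01 : IComputeShader
{
    public readonly ReadWriteBuffer<float> buffer;

    public void Execute()
    {
        buffer[ThreadIds.X] = Hlsl.Saturate(buffer[ThreadIds.X]);
    }
}

// The generated HLSL would look roughly like:
//
//   RWStructuredBuffer<float> buffer : register(u0);
//
//   [numthreads(64, 1, 1)]
//   void Execute(uint3 ids : SV_DispatchThreadID)
//   {
//       buffer[ids.x] = saturate(buffer[ids.x]);
//   }
```

Because the mapping is this direct, going the other way (taking an existing HLSL or GLSL shader and rewriting it as a C# struct) is mostly a mechanical renaming exercise.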

Also, as I mentioned before, one of the key differences (which is cool, I love that the two libraries have very different structures and key features!) is that ComputeSharp is tightly coupled with DX12 APIs, which gives it extra features like textures and automatic pixel format conversion, and makes it very easy to interop with other DX stuff (eg. using a swap chain panel to render a shader to a window, like I do in one of my samples).

It goes without saying that I think ILGPU is a really cool project! I will definitely need to find the time to take a more in-depth look at it; I'm sure there are plenty of cool things I could learn from it.

Cheers!


Very cool, good job.



