This seems pretty cool, and I'll probably play with this at some point, but sadly literally all of my GPUs are AMD or Intel at this point.
I'm sure you had a good reason, so I'm genuinely curious why CUDA was chosen instead of something like OpenCL?
(I'll add my usual disclaimer that I'm not asking this as some passive-aggressive way to criticize; I'm genuinely curious about the reasoning behind the choice.)
Early on, when we first started playing around with general-purpose computing on GPUs, we had Nvidia cards to begin with, and I started looking at the APIs that were available to me.
The CUDA ones were easier for me to get started with, had tons of learning content that Nvidia provided, and were more performant on the cards I had at the time compared to the other options. So we built up lots of expertise in this specific way of coding for GPUs. We also found time and time again that it was faster than OpenCL for what we were trying to do, and the hardware available to us on cloud providers was Nvidia GPUs.
The second answer to this question is that BlazingSQL is part of a greater ecosystem, RAPIDS (rapids.ai), and the largest contributor by far is Nvidia. We are really happy to be working with their developers to grow this ecosystem, and that means the technology will probably be CUDA-only unless we somehow program "backends" like they did with Thrust, but that would be eons from now.
> We also found time and time again that it was faster than OpenCL for what we were trying to do, and the hardware available to us on cloud providers was Nvidia GPUs.
Were any benchmarks done, or could you provide some more low-level reasons why CUDA was more performant? I'm not experienced with CUDA, just generally interested.
I also have to say that I am a bit skeptical of Nvidia, as I have never received any proper support for Linux development on Nvidia GPUs, whether for drivers or for tracking bugs on their cards in general. It was so frustrating that I just switched to AMD GPUs that "just worked". How is this different for these kinds of use cases? Does Nvidia only care about their potential enterprise customers and not about general usage of their GPUs on Linux? It rubs me the wrong way and I don't understand.
Nvidia loves and cherishes you (I think; I don't work there). They want you to be able to do this on your laptop, your server, your supercomputer.
If it has been a few years, I would encourage you to get your feet wet again, because support has gotten a lot better. It's not like 5 years ago, when it was nigh impossible to get the driver installed and weird conflicts would come up. I generally recommend using the Debian installer if that works for you. RAPIDS is meant to make data science at scale accessible to people. If you have trouble with CUDA, drop by https://rapids-goai.slack.com. There are many people there who are willing to help.
Do you use Nvidia products on Linux? Reading "love" and "Nvidia" in the same sentence feels a little odd, because the general sentiment toward Nvidia in the Linux community is "don't touch it with a 10 foot pole". If I remember correctly, Torvalds himself named it the worst hardware company they had to deal with.
I'm not sure what you're talking about. Games aside, Linux has been the de facto OS for anything serious with CUDA for almost as long as CUDA has existed. What exactly is the problem with it?
I think this sentiment exists solely among people who don't actually own any NVIDIA hardware. I've never had any problems with their drivers, and any crashes in video games can usually be attributed, at least in part, to the game itself. In contrast to Windows, Linux has abysmal support for restarting crashed video drivers.
Linus Torvalds's kernel-developer point of view might be very different from the majority of users'. End users just need to install Nvidia's proprietary drivers and everything just works.
For a long time, Nvidia was the best option for 3D graphics on Linux. ATI/AMD had terrible drivers (fglrx/Catalyst), Intel had abysmal performance.
The proprietary drivers are pretty nice and performant, and have been for a long time. The same can't be said about Intel (they don't produce comparable hardware) or AMD (until recently their drivers were garbage, and at the moment their best graphics card is worse than the best NVIDIA one).
With nvidia-docker (a multi-year effort at this point) and AMIs, especially in the era of ML, this is a non-issue for 80% of our users. The other 20% struggle even without the GPUs. ML is a thing and GPUs run it, so the community has come together here.
Linux laptops remain a mess in general tho, which is annoying for non-cloud dev =/
Well, it pretty much always was a part of the ecosystem; it just wasn't open source. We have been contributors to RAPIDS for a while. And yes, we are betting on Nvidia for sure.
Most people building GPGPU solutions are going to have to make a decision about which hardware they want to support. After that decision is made, it really isn't something you can revisit without copious amounts of money.
So, the part that confuses me with this argument is we live in an Intel world where they have 98% market share in servers. So we're already at the whim of a single company. Why not challenge that dominance?
Not the same. Two companies make x86 processors, and in the very specific case of this article/comment thread, more than one company supports OpenCL. Nvidia/CUDA is a one-pony show, no matter how you look at it.
That seems like a pretty good reason... I have been looking to learn some GPU programming to optimize some matrix math I've been doing for a pet project. My first instinct was telling me OpenCL since it's portable, but if people who actually know what they're talking about say CUDA is simpler to start with, it might be worth picking up a cheap Nvidia GPU/Jetson Nano and doing some processing that way.
Even if you choose OpenCL, the tools (profiler, debugger, etc.) are usually platform-specific. In addition, my experience with OpenCL across platforms was that each vendor's compiler had its own distinct issues and that performance was not portable.
I get the appeal of an open API, but OpenCL never grew a development ecosystem or any libraries. IMO it is dying and isn't worth the effort. AMD is implementing CUDA with HIP; maybe roll with that.
You definitely do not want to use OpenCL for matrix multiplies on Nvidia cards. That's the most highly optimized task on GPUs, so much so that they have dedicated hardware units for it. OpenCL cannot take advantage of those.
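If the matrix math in question is reachable from Python, a minimal sketch of what that can look like (assuming CuPy, which dispatches matmul to cuBLAS, and on recent cards to the tensor-core paths where the dtype allows) would be something like:

```python
# Sketch: GPU matrix multiply from Python via CuPy (dispatches to cuBLAS).
import cupy as cp

a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)

c = a @ b                           # runs on the GPU through cuBLAS
cp.cuda.Stream.null.synchronize()   # wait for the kernel before timing/reading
print(c[:2, :2])
```

Not an official recommendation from anyone in this thread, just one low-effort way to try the hardware before writing your own kernels.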
The driver API is very close to the OpenCL API and is very low level. Most people use the CUDA runtime API, which is vastly more convenient. The main difficulty with OpenCL and the driver API is that you have to manually load GPU code onto the device, which then returns a handle. You generally have to load the code onto every device, which means multiple handles for the same function. This makes executing a kernel quite a lot of work. The runtime API does this all automatically, which makes programming with CUDA quite easy, since launching a kernel is basically a function call. The CUDA runtime also automatically handles context creation, which is another time saver.
When I first learned OpenCL I was shocked at how difficult it was to write a simple vector add program, since there was all this additional code loading, context creation, etc. The setup/boilerplate was greater than the actual code itself.
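For a rough sense of how small that gets, here is a hedged sketch in Python using Numba's CUDA support (my assumption, since the point above is about the C/C++ runtime API, but the launch-is-just-a-call convenience carries over):

```python
# Sketch of a vector add where launching the kernel is essentially a function call.
# No manual module loading or context creation needed.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.arange(n, dtype=np.float32)
b = 2 * a

d_a, d_b = cuda.to_device(a), cuda.to_device(b)
d_out = cuda.device_array_like(a)

threads = 256
blocks = (n + threads - 1) // threads
vector_add[blocks, threads](d_a, d_b, d_out)   # launching is just a call
print(d_out.copy_to_host()[:4])                # [0. 3. 6. 9.]
```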
It basically boils down to convenience, in my opinion. Couple this with the fact that NVIDIA generally has the most powerful and energy-efficient cards, and it's no surprise they took the market.
> The driver API is very close to the OpenCL API and is very low level.
They are only realistically comparable from OpenCL 2.0 onwards. But no NVIDIA card supports anything beyond 1.2, and with that decision they basically killed OpenCL.
https://www.codeplay.com/products/computesuite/computecpp
ComputeCpp enables SYCL on all OpenCL devices: Intel, AMD, Nvidia, FPGAs, lots of other things, and smartphones, which is an order of magnitude more devices than CUDA. Products targeting only Nvidia devices are mostly niche markets, which is pathetic.
As for debuggers, CodeXL has been extended to support it.
Besides SYCL, there's OpenMP/OpenACC GPU offloading, which has become viable and portable.
There's also HIP/ROCm, which transpiles to OpenCL AND CUDA (best of both worlds?).
And it can transpile CUDA to HIP almost entirely automatically.
That's how AMD ported TensorFlow to OpenCL.
iOS with its ageing OpenCL drivers, or the new Metal Shader drivers?
Or Android, where Google rather uses its own languages, Renderscript and Halide?
Yes, some OEMs do happen to ship non-standard Android drivers that also support OpenCL, but those require a vendor-specific SDK to actually be usable, and thus aren't an option versus Renderscript or Halide.
Do you happen to actually know Codeplay? They made their name creating compilers with vectorization optimizations for the PS3 and other game consoles.
Their ComputeCpp is a pivot into the GPGPU world, and they aren't doing the community edition just out of the kindness of their hearts, but rather as a path into their products.
"If you want to do things with this release, be prepared to be a pioneer. This release is pre-conformance, which means that we do not implement 100% of the SYCL specification. We currently only support Linux and two OpenCL implementations, by Intel and AMD, but wider support is coming. You may find that some unsupported implementations of OpenCL work with ComputeCpp. That's great, but we don't officially support anything else (yet). Most of the open-source libraries being ported to SYCL are not completed yet. This means that you should only check out some of these projects if you want to do some development yourself. We are building a big vision here: large, complex software highly accelerated on a wide range of processors, entirely by open standards. So, please be patient, or work with us."
Feels like it still needs to mature a little bit.
Even Intel, despite their SYCL contributions to clang (experimental release this past July 31st), has been developing their own extension in parallel, Data Parallel C++, and no one knows in what form they will contribute it back to Khronos, if at all.
Meanwhile, CUDA has been developed to be language-agnostic from the get-go, with out-of-the-box support for C, C++, and Fortran, and now Julia, Haskell, Java, and .NET as well.
All the while, Khronos kept banging the "C is good enough" message until it was too late for vendors to actually care about SPIR-V.
Have you compared performance between your suggested solutions and what can be achieved using the hardware vendors' platforms? If not, then what's kind of pathetic is how quickly you dismiss the people above who say they HAVE done this before.
If you have seen something we have not when it comes to performance then please by all means share it so we can learn!
This is great. The BlazingDB guys are awesome, and now that the project is open source, this is another good reason for my teams to experiment with different workloads and compare it against a Spark SQL approach.
We worked with the team early on. In turn, that means it's inside one of the power tools at gov, bank, etc. teams, even if most of the users don't quite know what a GPU DB is :) We do GPU visual graph analytics over event data (security, fraud, customer 360, ...). We use it for a bunch of things: interactive sub-100ms timebars, histograms, etc., plus any full-table compute stuff you'd do in pandas, SQL, Spark, etc. Any UI interaction like a filter can trigger tons of queries, and w/ GPUs, that means they can quickly compute all sorts of things.
The reason Graphistry picked BlazingSQL is it fit in as part of our approach of end-to-end GPU services that compose by sharing in-memory Apache Arrow format columnar data. When the Blazing team aligned on Nvidia RAPIDS more deeply than the other 2nd-wave GPU analytics engines, it made the most sense as an embedded compute dependency. Going forward, that means Blazing can focus on making a great SQL engine, and we know the rate of their GPU progress won't be pegged to their team but to RAPIDS. A surprise win over just cudf (python) was eliminating most of the constant overheads (10ms->1ms / call), and looking forward, seems like an easier path to multi/many-GPU vs. cudf (dask).
We should share a tech report at some point - bravo to the team!
Looks like a good way to do analytics on the GPU. The Python API is clean and simple.
The premise is that GPUs will accelerate columnar data analytics. And, with "Dask" [1], you can run those workloads on a cluster.
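For anyone curious, the single-node API looks roughly like this (a sketch from my reading of the docs; the table name and file path are placeholders):

```python
# Minimal sketch of the BlazingSQL Python API on a single node.
from blazingsql import BlazingContext

bc = BlazingContext()
bc.create_table('taxi', '/data/taxi.parquet')   # register a Parquet file as a table
gdf = bc.sql('SELECT passenger_count, AVG(fare_amount) AS avg_fare '
             'FROM taxi GROUP BY passenger_count')
print(gdf)                                      # the result is a cuDF DataFrame
```

I believe the cluster case uses the same `bc.sql` calls once a Dask client is passed to `BlazingContext`, but check the docs on that.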
I wonder if careful indexing on initial write would outperform this system. This system looks like it's best when you have totally raw, unindexed data. Perhaps a future thing to do is to generate a side index during initial column scans to speed up future queries?
Also, GPU memory is pretty expensive. How does the total-cost-of-ownership compare to just running on RAM with powerful multi-core CPUs? There's like 512-bit vector operations these days.
GPU memory is expensive, but a big-as-#@$% computer is even more expensive. When we show comparisons to things like Spark, we are doing so on a cost basis. So if we say we are x times faster than some technology on some workload, what we did was launch clusters that have similar costs. Total cost of ownership is also reduced by the fact that the engine itself is totally ephemeral. You can turn it off and on within seconds.
What kind of benefits does CUDA bring to databases? I've never heard of running a database on a GPU before. I couldn't find anything on their homepage other than a comparison with a few other DB options.
This is a distributed SQL engine, not a database. We store no data. You store your data in HDFS, S3, POSIX filesystems, NFS, etc. We allow you to query directly from these filesystems, using the file formats you already have. You can look here to see the file formats cudf supports. https://github.com/rapidsai/cudf/tree/branch-0.9/cpp/src/io
Greatly increased processing capacity. We can simply perform orders of magnitude more instructions per second with the GPUs we are using than with a CPU.
Decompression and parsing of formats like CSV and Parquet happen on the GPU, orders of magnitude faster than the best CPU alternatives.
You can take the output of your queries and provide it to machine learning jobs with zero-copy IPC, and get the results back the same way. We are all about interoperability with the RAPIDS ecosystem.
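Roughly, that flow looks like the sketch below (hedged: the S3 registration call, bucket, table, and column names are my guesses and placeholders, and I'm assuming cuML's KMeans as the downstream ML step):

```python
# Sketch: query Parquet data registered from S3, keep the result on the GPU,
# and hand it to a RAPIDS ML algorithm without round-tripping through host memory.
from blazingsql import BlazingContext
from cuml.cluster import KMeans

bc = BlazingContext()
bc.s3('mydata', bucket_name='my-bucket')               # register an S3 bucket (placeholder)
bc.create_table('events', 's3://mydata/events.parquet')

gdf = bc.sql('SELECT feature_a, feature_b FROM events WHERE feature_a > 0')

km = KMeans(n_clusters=8)
km.fit(gdf)                # the cuDF DataFrame stays in GPU memory end-to-end
print(km.cluster_centers_)
```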
Is there any reason why a SQL format isn't in that list? I'm wondering if there's a way to join SQL sources with file storage sources. An example of this would be filtering or enrichment operations.
When you say SQL format, do you mean being able to read the output of a JDBC or ODBC driver?
If that's the case, then it's mostly just a matter of time. You are not the first person to ask about this, and now that there are Java bindings in cudf, this might become easier to make a reality in the next few months.
Or do you mean being able to read a database's file format natively?
If this is the case there are many reasons.
1. There are many poorly documented or undocumented formats
2. Even if you decide to read some other DB's format natively, those formats change over time
3. Little control of how and where the data is laid out
I've read the website, but I couldn't find a hint that the engine is distributed. Even the Spark benchmarks compare a single instance with multiple nodes.
Is it distributed? How do I set it up in a distributed mode?
Does it support nested Parquet (something that even Spark itself struggles to support inside SQL)?
In summary, you get snappy, interactive query speeds on large data sets. I've run it locally and the results are pretty amazing compared to Postgres or even Tableau in-memory.
OmniSci transparently caches data across the memory of the CPUs and GPUs on a server, so after the initial read, it is likely that the data for subsequent queries will be in memory.
We've also optimized our storage formats and multithreaded our disk reads, such that we can easily hit many gigabytes per second on flash storage. Plus, new persistent memory technologies like Intel Optane will enable even more instant reads from "cold" storage.
CUDA by itself brings easy-to-run parallel algorithms.
It's not of much value for databases unless you have a proper infrastructure set up to use it correctly. Same is true for columnar aspects, for example.
People have been building columnar databases to do analytics quickly. GPUs (with CUDA) can run analytics operations (think join, group by, math, sorting) on columnar data in a much more efficient manner. They're designed for operations on vectors, which columns are.
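As a hedged illustration, those kinds of operations look like this in cuDF, the RAPIDS dataframe library (toy data, made-up column names):

```python
# Sketch: typical columnar analytics operations running on the GPU via cuDF.
import cudf

sales = cudf.DataFrame({'store': [1, 2, 1, 2], 'amount': [10.0, 5.0, 7.5, 2.5]})
stores = cudf.DataFrame({'store': [1, 2], 'region': ['east', 'west']})

joined = sales.merge(stores, on='store')                # GPU join
summary = joined.groupby('region')['amount'].sum()      # GPU group-by aggregation
print(summary.sort_index())
```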
We've been doing this ourselves too with SQream DB: https://sqream.com. It's an enterprise data warehouse with GPU acceleration. We use CUDA exclusively too.
PG-Strom is a GPU accelerator extension for PostgreSQL which has been around for a few years now. I have not tried it myself... http://heterodb.github.io/pg-strom/
PartiQL is a query language, based in SQL and extended to be more natural with unstructured and nested data. It can be used with various database and querying engines.
BlazingDB/SQL is a querying engine, more similar to Presto or Apache Drill, and specializes in using GPUs for processing power.
Yeah, we were totally ignorant of PartiQL until your post. Now, looking at it, it looks boss! It totally agrees with many of our theses, and there looks to be a lot to glean from this project as well.
This is an interesting comparison I wouldn't have thought of. But yes, PartiQL does have a similar feel to this announcement, though without GPU acceleration the processing speed might be several orders of magnitude slower.