The idea of adding a graphics API to a language standard is incredibly misguided. There are plenty of 2D graphics APIs already, from SDL to the ones in Qt, GTK, Cairo, and Canvas, and then the more proprietary ones. People cannot seem to stop creating them.
Perhaps people thought if a GFX API were incorporated into the C++ standard it would replace all the others, but... Can anyone imagine that actually happening? Really?
I'd be happy if languages would adopt Vec2 Vec3 and Vec4 as standard data types - both float and double. I understand there are ways to create these in most any language, I just want them standardized so we can all write code the same way and get the performance of some vector instructions without using intrinsics.
If they can't even standardize the obvious (fixed length) vector types used in graphics (and other things) they really shouldn't try to define a graphics API.
> I'd be happy if languages would adopt Vec2 Vec3 and Vec4 as standard data types - both float and double. I understand there are ways to create these in most any language, I just want them standardized so we can all write code the same way and get the performance of some vector instructions without using intrinsics.
Yes. Over 15 years ago I proposed a library for Vec2, Vec3, and Vec4 to the C++ standards committee. It's the one from Graphics Gems, but rewritten as all inline.[1] The C++ standards people wanted more usage of templates.
What the C++ (and C) standards could use is a native type for SIMD vectors. In other words, standardize the GCC vector extensions [0] or some variant thereof.
Just adding a new header-only library for vector arithmetic isn't something that needs to be in the language standard (library), there are lots of options for that already (such as glm[1]).
I've been using vector extensions for my 3d graphics and physics stuff for years now and it works great. There are some pain points, but it already covers most of what I need. As a bonus, it can be used in plain C and you get all the usual math operators (+, -, /, etc) for free. There are minor incompatibilities between GCC and Clang, but nothing a few #ifdefs won't solve.
Here's a quick introduction in a few lines of code:
typedef float vec4f __attribute__((vector_size(16)));
vec4f a = { 1, 2, 3, 4 }, b = { 5, 6, 7, 8 };
vec4f c = a + b;
vec4f d = (a + b) * c; // with -ffast-math, you may get fused multiply-add (FMA/MAD)
In my projects, I've always put together an ad-hoc collection of arithmetic routines I need (e.g. dot, cross, matrix product, inverse, etc). A proper, well tested library of basic math operations for SIMD vectors would be very useful but I haven't got the time it takes.
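For illustration, here's roughly what the start of such a collection looks like on top of the vector extensions; dot3 and cross3 are made-up names, not from the repo linked below:

```cpp
// Sketch of dot/cross routines on top of GCC/Clang vector extensions.
// The names dot3/cross3 are illustrative, not from any particular library.
typedef float vec4f __attribute__((vector_size(16)));

// Dot product of the first three lanes (w ignored).
static inline float dot3(vec4f a, vec4f b) {
    vec4f p = a * b;             // lane-wise multiply
    return p[0] + p[1] + p[2];   // horizontal add of x, y, z
}

// Cross product in the first three lanes; w is set to 0. Written with
// explicit lane accesses here for readability; a tuned version would
// use __builtin_shufflevector (Clang) or __builtin_shuffle (GCC).
static inline vec4f cross3(vec4f a, vec4f b) {
    vec4f r = {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
        0.0f
    };
    return r;
}
```
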
If anyone is interested, some of my math routines are bundled in this github repo [2]. It's incomplete, undocumented, not well tested and there are known issues I haven't gotten around to (e.g. quaternion products give wrong results).
If anyone thinks this is useful and is interested in contributing/collaborating, leave a comment.
There are some efforts underway to standardize using SIMD but this is really a separate problem to providing a short vector math library suitable for the kind of linear algebra useful for 3D graphics and physics.
Making your vec4f type a SIMD register type is not a good way to make use of SIMD hardware. The best approach is generally an SoA (Struct of Arrays) rather than an AoS (Array of Structs) and the best CPU programming model for that is something like ispc[1].
On some platforms alignment issues become a big problem for naïve vector math libraries too, though this is less of an issue on current generation x64 hardware.
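To make the AoS-vs-SoA point concrete, here's a minimal sketch; the names are hypothetical, and real code would hide the layout choice behind an interface (which is part of what ispc gives you):

```cpp
#include <cstddef>

// AoS: one struct per particle. A SIMD add over all x components must
// gather every 4th float from memory.
struct ParticleAoS { float x, y, z, w; };

// SoA: one contiguous array per component. Stepping all x components
// is a unit-stride run over memory that auto-vectorizes well.
struct ParticlesSoA {
    static constexpr std::size_t N = 1024;
    float x[N], y[N], z[N];

    void integrate_x(float vx, float dt) {
        for (std::size_t i = 0; i < N; ++i)
            x[i] += vx * dt;   // unit-stride loads/stores: SIMD-friendly
    }
};
```
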
This is exactly what I'm talking about. I want these vectors as standard types. First class data types, not part of a library or typedef. The example given in the parent comment is spot on with typical use.
I like that GCC defined their own way to declare these that is independent of x86, ARM, and Altivec intrinsics so your code is portable (so long as you use GCC). Geometrically relevant vectors are common enough to be native types, let's get there.
> There are plenty of 2D graphics APIs already from SDL
SDL2 doesn't have a 2D graphics API in the traditional sense. It has a basic bitmap software blitter designed for debugging and bootstrapping. SDL2's main forte is being a cross-platform wrapper around HW-accelerated APIs like OpenGL and Direct3D.
> to one in QT, GTK, Cairo
While Qt does indeed have QPainter, it is switching away from QPainter towards OpenGL with QtQuick. GTK+ uses Cairo (with plans to switch to OpenGL/Vulkan as well). So far, our list is down from 4 to 2.
> and then the more proprietary ones
I'd list it as: GDI+ (Microsoft), Quartz 2D / CoreImage (Apple), Skia (Google) and Cairo (X.org). HTML5's <canvas> API is taken from Quartz but is often backed by the others.
So there are not that many choices for a modern 2D vector graphics library. My main issue with the 2D proposal is that hardware acceleration and modern graphics just flat out don't work the way the 2D API expects them to. You can be very good at pretending, but at some point you'll have to grow up, put on your big boy clothes, and learn how GPUs work. Hence the heavy migration away from the others. Direct2D is a good example here of a well-designed API, even if the implementation is a bit wacky. Pathfinder 2 is similar.
Indeed it does, and you can use it to draw accelerated scaled and rotated images, lines, and rectangles. There are even a few blending modes available. It's a nice foundation for simple cross-platform 2D graphics, it's come a long way since SDL1.
SDL2's render API can use GPU acceleration, but its feature set is rather limited. This is plenty for many, but incomplete for others. I've heard its feature set referred to as 'SNES-level graphics', or something to that effect, from at least one SDL-internals dev, and I'd echo that.
I'd like to see a reasonably-standard C++ API specification for at least a few graphics-related things (and perhaps more over time), but I'm not sure that writing a graphics API with the sole goal of eliminating all other APIs or libraries would make any sense. The people I've talked to, some of whom have done work on P0267/io2d, don't seem to be explicitly aiming for this either, but perhaps I've missed something.
I'd be happy to see vec2/vec3/vec4 style APIs in the C++ standard, too. P0267/io2d does have some support for this, although I think it could certainly stand to get some love + attention!
From what I heard in some C++ podcasts, the goal isn't creating the ultimate graphics API; it doesn't need to be usable by professionals. It is aimed mostly at beginners and people teaching C++.
Having a single interface for things like Vec2 was also considered. There are hopes it could make mixing and switching third-party graphics libraries easier by having common types.
That sounds a lot more like what they should aim for: Formalizing algebra.
It's probably a far safer bet that algebra won't be changing very often, much less so than any graphics pipeline!
I think they would be wise to think a little beyond beginners though. Algebra does not have to be always complicated, but it definitely needs to be open-ended to extend its usefulness.
The standard has been burned by short-sightedness before.
Before they added std::string, there were plenty of string APIs already. People couldn't seem to stop creating them.
A graphics API won't replace all the custom-made graphic APIs, just as std::string didn't replace all the custom-made string libraries. Nevertheless, it's great to know you have something readily available.
I appreciate the work a lot of people put into this proposal but I'm actually glad it isn't moving forward. I don't think there's an example of a graphics API which is really good across a wide range of use cases from beginners to demanding professional use and I'm not sure it's good for the language to standardize on something that say only solves the beginner use case. This is really an area best left to libraries outside the core language standard currently I think.
The push for a standard 2D API in C++ seems so bizarre to me. There's a place for languages like Python that are designed around quickly prototyping things, but this has never really been C++'s domain. Programmers for most of the esoteric, resource-constrained platforms where C/C++ really shines don't have much need for a 2D API, or will probably want to write their own. Meanwhile, popular libraries already meet the needs of most everyone else.
Is there a genuine use case for a language-level standard 2d api? To me it seems like it is being pushed as a standard just for the sake of a checkbox, but am I missing something here?
One strong argument at the evening session was that not everyone has access to non-standard libraries, which in my opinion is a ridiculous argument. If your work doesn't allow non-standard libraries, then it's time to move on, we really should not feed the companies with the "not invented here"-syndrome.
Another argument many people brought up is that it would allow C++ to be more usable as a teaching tool or a few shared their stories of basic graphic programming which eventually landed them in C++. But I really don't see why the standard needs a toy API. If people pick C++ as their learning language, they should be able to learn how to use another library, since that's very much part of learning C++. Besides there are more complicated concepts in C++ than having to follow some dead simple tutorial on getting another library working. And if you go down the teacher route, then it's really no issue setting up a library for those learning.
And finally there's the argument of "other languages have it too". Yet, I haven't really seen examples of that. I think .NET has something like that, but then again .NET also has a full API for window creation and tons of other things. Maybe a simple drawing API is a good idea, but first we'd have to add other things to the standard.
All in all and seeing the evolution of OpenGL, I simply don't see a reason why the C++ standard needs a simple drawing API right now. There are tons of other things that are way more important and should be added first. And once you get to a standard drawing API, what good is it, when you still have to write custom/platform specific code just to get an actual rendering surface (aka window) on screen?
And one final point: who needs an API that is designed for a software renderer? What year is it, and who is still running software renderers?
The authors of the recent, P0267 C++ graphics paper (aka "io2d"; latest revisions getting posted to https://github.com/mikebmcl/io2dts/tree/master/papers), as far as I've seen (online, mostly), aren't interested in making a "toy" API. Many, if not all of them, want people to be able to do 'real' work with it. They've also seemed willing to accept constructive feedback on it, and are cognizant that there is a good amount more work to be done, if something useful is to be a result!
There is, I think, room to work on different C++ features in parallel. A graphics expert isn't necessarily going to want, or be qualified, to lead a charge on all future, non-graphics features.
In terms of 'other languages have it too', here's one platform/standard that has it: HTML5. Its Canvas API is pretty widely available, though in practice it may demand a fair amount of runtime processing power.
Regarding software renderers, I'd argue that these are still useful in some use cases. I've worked on some projects where a constexpr/compile-time canvas-style implementation would have been useful.
Since converting a number to a string still requires a lot of boilerplate in C++, sorting a container still requires typing the name of the container twice, and checking if a map or set contains a key requires lots of code, I can only imagine drawing a pixel is going to become something similarly verbose.
Converting a number to a string: std::to_string() available since C++11. Sorting a container will be sort(container) with the Range V3 proposal which is getting close to standardization. C++20 introduces a contains() member for map and set.
Cool! Been using C++11 for so long now and never knew about std::to_string.
Heard about the ranges many times already; I think they've been delayed a few times. What's wrong with functions in the standard library that simply take a container and call .begin() and .end() inside of them, though? No such complex thing as ranges is needed for that.
Contains sounds nice, didn't know about that one yet either.
It's looking likely that ranges will make it into C++20 according to the author of the proposal Eric Niebler: "The entirety of the Ranges TS with the addition of lazy adaptor pipelines is now all but certain to be in C++20."
The ranges proposal has been around for a long time; unfortunately, it was defined using the concepts mechanism (http://en.cppreference.com/w/cpp/language/constraints). Concepts is a much more complicated change. This mostly affects the wording in standardese and potentially the quality of compiler errors. It would be possible to have ranges without concepts, but that would require rewriting most of the proposal. There is even an existing implementation which is C++11 compatible.
My understanding is that ranges are pretty simple to use, a nice generalization and cleaning up of STL. Pretty much just follows through on all the implications of "functions that simply take a container and call begin and end on them".
I think people disregard or forget the new programmer. Some of them are coming to C++. The question starts with how do I draw to the screen. The answer, well you download <pick a library>, compile it, add it to the include/linker paths, and then it should work. But, one thing is, if it isn't in the standard it does not exist.
A simple set_pixel/get_pixel onto a surface (real or just a buffer) is pretty powerful. It allows expressing data in much better terms than cout/printf for many domains. I remember the joy of displaying my first sine wave.
I think the current proposal was too heavy and I think that a minimal subset where one can draw pixels, lines... onto a surface is all that is needed for now. This gives a standard way of expressing that idea that allows further abstractions in the future. But for now, I want an easy way to visualize something.
Better, IMO, to focus on improving the situation with cross-platform package managers and build systems for C++, simplifying how beginners get access to those third-party libraries in their projects, rather than trying to integrate graphics into the standard.
There is some progress being made on package managers for C++ but things are still pretty fragmented and not very beginner friendly.
Agreed times a million. I'm still blown away that there is no common npm-like C/C++ package manager. I know there are a few trying to make it happen, but still... It's really the widespread adoption that makes these things valuable.
I don't think it is an either/or. But being in the standard really aids discovery and provides an interface that works just as well for everyone.
As for package managers, first you have to get everyone to agree on an approach.
So the first question is binary, source, or either. Then figure out if it is distributed (e.g. specify the host in the dependency) or centralized.
For me, I would want something that is mixed binary/source and distributed. I do not want to be required to pull binaries from a central host, or to require a central DB of locations. It would be good to have, just not mandatory.
Then, like you said, the build system. I am not sure that this needs to be any one system. If you look at CMake external projects, it is close to being really good, allowing most source control systems/URLs. What it lacks is the ability to use versioning (such as git commit/release and per-user/per-system caching). It also does a git check on each build.
But specifying dependencies like Carthage, used for Xcode, would be a nice starting point.
And why should the standard be encumbered with a toy API that's only useful for beginners (it's not like the rest of the standard library is exactly beginner friendly).
And it's a bottomless pit, what use is drawing alone if you can't interact with the program? If you draw something, you need a window to draw into. You need a portable application entry because the standard main() is useless (think of Android, iOS and HTML5 via wasm). You need some sort of window-management. You need at least mouse and keyboard handling (and also consider devices which only have touch input). Want to render a hiscore? Now you need text rendering too. A button to start the game? Do we add a complete UI system like Qt?
And that's just for getting beginners to write a little interactive toy program, all of this will be fairly useless for 'real' applications.
Sorry to be blunt, but the whole idea was rubbish to begin with.
> But, one thing is, if it isn't in the standard it does not exist.
I am not sure it's true. There are plenty of graphics libraries, including ones with their own standards (like OpenGL), which not only exist but are quite popular and have tons of code written against them.
If they're going to add some big new module to the standard, it should be networking, not 2D graphics.
Edit to add a little more rationale: networking is something quite a lot of programs need; it always needs OS support, but different OSes have different APIs that do basically the same thing. It's fairly easy to pick a common subset that covers 95% of use cases and there would be huge value in standardizing it. I would definitely use it.
2D graphics is something that not many programs need, and that can already be done in pure user code (apart from actually displaying it on the screen). It's hard to pick a useful common subset -- what kinds of curves do you support? How about dashes, outlines, shadows? What about color spaces? What about displaying to the screen, what about printing? It would be a huge mess. Even if it were standardized I'm not sure I would end up using it (especially if it were as awkward to use as much of the STL).
I agree that having a standard with a common subset of widely-used features would be of value, however, I'm suspicious of the claim that it could cover "95% of use cases". Are there some specific use-cases that you have, or had in mind?
> Several thousand hours of work has gone into this proposal, and we need a way of preventing this from happening again.
To be fair, there were quite a few voices against it, but they were mostly ignored, if not belittled, by being told to write a paper or be quiet. [1]
If you only take people's opinions seriously when they operate in a committee manner, then you shouldn't be surprised that, despite your invested time, people eventually rise up and stop the work.
Also just because some people invested a lot of hours doesn't simply satisfy the condition to force something in the standard that is still far away from finished. Sunk cost fallacy anyone?
I got to start a begin-with-an-empty-buffer project last year and chose c++17, treating it as a brand new language as if I'd never used C++ before. If you start with modern constructs and work backwards (i.e. ignoring legacy constructs and ideas that haven't aged well, like a "pure O-O" approach or iostreams) it's actually a pretty clear and expressive language.
Another advantage of the 'empty buffer' is that we could compile right away with almost every warning enabled (except for some 3P libraries we use), which also permitted some very aggressive compiler optimizations.
Unfortunately most people don't have this luxury :-(. But from this perspective I've generally been happy with the work the committee's been doing recently.
What resources did you use to learn the new features of C++? I don't know the language, but I've begun learning it, and I don't know if what I'm learning is the nice new "modern constructs" or the crappy old ones.
I can't stand videos, so if you can't use pjmlp's advice, do start with A Tour of C++, then read the summaries of standards compliance for GCC and LLVM (on their respective websites), which will suggest which features are new and worth reading up on.
I also read the standard (well, the "almost final" standard, which is effectively the final one but which could be published on the net because it was what was voted on to become the standard -- the standards themselves are copyrighted and can't be posted to the net!) but unless you're used to reading language standards it may confuse more than it answers.
The blogs of folks like Herb Sutter also explain a lot of subtle points and best practices.
The reason to have a 2D graphics library as part of the C++ Standard is to allow one to write a C++ program that does graphics such that it has only one dependency: an implementation of the C++ Standard. It won't be for everyone and certainly won't affect the ability to use any other graphics library. This is a good thing, IMHO. Same for more complete access to the file system, something that has also been missing from C++.
And so marks the end of the std::graphics era. I originally thought it was a good idea, but now agree that a graphics API like that has no place in standardisation.
For the Sciter project ( https://sciter.com ) I did exactly that: created abstract graphics C++ library that wraps all supported backends:
1. Direct2D and GDI+ on Windows;
2. Skia on Windows, MacOS and Linux;
3. Cairo on Linux/GTK;
4. CoreGraphics on MacOS;
So I hope I know the subject, and here are a few comments:
Biggest problem.
Two significantly different drawing models:
GPU-based libraries (Direct2D/DirectX, Skia/OpenGL): these use batch rendering. Calls like graphics::fill_rect(...) just produce a record in a command stream that gets sent to the GPU for rendering. A kind of asynchronous rendering.
CPU libraries - rasterization of graphic primitives to a bitmap / framebuffer. These are Cairo, GDI+ and CoreGraphics.
While GPU and CPU rendering can be wrapped into a single, unified interface, there are nuances.
GPU libraries update the target surface in full. Even when you need to change one pixel, you send the full batch to the GPU to render the whole thing. (There are optimizations around this, but in 99% of cases that's how it works.)
CPU libraries update just that pixel on the bitmap. But when you need to update a whole window on a high-DPI monitor...
The number of pixels on a 300ppi screen is roughly ten times larger than on a 96ppi screen ((300/96)² ≈ 9.8). Our current CPUs and buses are still from the "96ppi era". The GPU is the only option with high-DPI.
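To make the two models concrete, here's a hedged sketch of one interface with both execution strategies behind it; all the names (canvas, fill_rect, present) are hypothetical, not from any proposal or backend:

```cpp
#include <cstdint>
#include <vector>

struct rect { float x, y, w, h; };

// Hypothetical unified interface over both drawing models.
struct canvas {
    virtual void fill_rect(rect r, uint32_t color) = 0;
    virtual void present() = 0;                  // flush to the target surface
    virtual ~canvas() {}
};

// CPU model: rasterize immediately into a bitmap; only touched pixels change.
struct cpu_canvas : canvas {
    int w, h;
    std::vector<uint32_t> fb;
    cpu_canvas(int w_, int h_) : w(w_), h(h_), fb(w_ * h_, 0) {}
    void fill_rect(rect r, uint32_t c) override {
        for (int y = (int)r.y; y < (int)(r.y + r.h) && y < h; ++y)
            for (int x = (int)r.x; x < (int)(r.x + r.w) && x < w; ++x)
                if (x >= 0 && y >= 0) fb[y * w + x] = c;
    }
    void present() override {}                   // bitmap is already up to date
};

// GPU model: record commands, submit the whole batch on present().
struct gpu_canvas : canvas {
    struct cmd { rect r; uint32_t color; };
    std::vector<cmd> stream;
    void fill_rect(rect r, uint32_t c) override { stream.push_back({r, c}); }
    void present() override {
        // a real backend would submit 'stream' to the driver here,
        // redrawing the full target surface
        stream.clear();
    }
};
```
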
Conceptually, a C++ 2D library can be built around a single GFX backend function similar to NV_path_renderer ( https://www.khronos.org/registry/OpenGL/extensions/NV/NV_pat... ). All 2D primitives can be simple wrappers of NV_path_renderer calls + blend_bitmap().
But there is another huge problem: text rendering. Just a few keywords: LTR, RTL, TTB text layouts. Font discovery and substitution. Different font formats. ClearType and other antialiasing techniques tied to monitor architectures. That's too big, and I am not sure about the size of a 2D specification that would include all the needed details.
I've been playing in the space too, and basically agree with what you say but have some other observations.
1. The "fancy" operations like the blending modes (I believe these were originally from Photoshop and are now part of PDF, among other things) are definitely doable on GPU, but many of them require shaders and cannot be done in the fixed-function blend pipeline.
2. There are a bunch of optimizations that are still very useful on GPU, including reordering to improve batching of draw calls, and techniques to reduce overdraw when an operation will be completely painted over. Webrender uses hierarchical z-buffer to GPU-accelerate these.
3. There are advanced things like linear-sRGB colorspaces that are very useful, well supported on modern GPU hardware.
4. CoreGraphics on MacOS is pretty slow. On Windows, I've had varying results with Direct2D (I know we chatted some in a previous thread). I think it depends a lot on the quality of drivers.
5. Incremental repaint (as opposed to painting the surface in full) is still useful, if for no other reason than to reduce energy use. It's difficult on some platforms (Mac), well supported on others (Windows), and optional/driver-dependent on others (Linux/Android).
6. Clipping is still where much of the performance dies. Sure, you _can_ specify everything in terms of bitmap blending, but do you want to? That's an awful lot of gmem bandwidth you're churning.
And yes, text is fantastically complicated.
Long story short, I don't see any good one-size-fits-all solution here.
I work in 3D (and 2D) graphics primarily in C++. I don't think the standards committee can come up with anything that'll replace the GPU-accelerated APIs (there's already the Direct3D 12-vs-Vulkan split that's really unlikely to have a clear winner), however, I'd take simpler, CPU-only libs for common tasks:
* Vector and matrix math -- vec2d, vec3d, conversion to quaternion, a matrix library that doesn't suck, overloaded operators, etc. I dabble in Python, always have, and numpy isn't the best lib out there (some things are oddly named, I'm missing the operators, etc) but it's more complete, more usable, and better documented than any lib I've used in C++.
* Basic image manipulation. Like "write bitmap image to PNG" basic. This will never get used for anything serious but will be useful for toy projects/debugging
* Basic intersection tests. This is 2018, and I've had to write some variant of (basic geometry primitive) intersects (other basic geometry primitive) five times since the beginning of the year, using formulas most people probably haven't seen since their SATs, because I at least know the formulas by heart by now, whereas all the aforementioned libraries are too verbose, not documented, require linking a huge library, or are header-only and will bloat your compilation times by a factor of 10.
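For the record, here's the sort of thing that gets rewritten every time; a hedged sketch, not any particular library's API:

```cpp
#include <algorithm>

struct circle { float cx, cy, r; };
struct aabb   { float min_x, min_y, max_x, max_y; };

// Circles intersect iff the distance between centers is at most r1 + r2.
// Compare squared distances to avoid the sqrt.
inline bool intersects(const circle& a, const circle& b) {
    float dx = a.cx - b.cx, dy = a.cy - b.cy, rs = a.r + b.r;
    return dx * dx + dy * dy <= rs * rs;
}

// Circle vs box: clamp the center to the box to find the closest point,
// then test that point against the radius.
inline bool intersects(const circle& c, const aabb& b) {
    float px = std::max(b.min_x, std::min(c.cx, b.max_x));
    float py = std::max(b.min_y, std::min(c.cy, b.max_y));
    float dx = c.cx - px, dy = c.cy - py;
    return dx * dx + dy * dy <= c.r * c.r;
}
```
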
I agree that such a library should steer well clear of text rendering. Text rendering is really hard, has weird edge cases, and the people who do it have their own favorite library already. But the other stuff I mentioned, that's stuff already present to some extent in the "mainstream" C++ libraries, like boost and Qt.
I think such a library should definitely not be 2D-only. Tons of libraries go that direction (for example Canvas in HTML5) because 2D is easier if you have no hardware acceleration, but even in the 3D space, there are algorithms where usability>performance and currently the people writing those algorithms are copy-pasting left and right and spending hours writing unit tests for high school math.
So I read the 2D graphics proposal, P0267[1], and this summary[2], and it seems to suffer from most of the problems that caused, e.g., HTML5 Canvas to be superseded by WebGL whenever it became practical to do so in most browsers. (And Canvas had the solid advantage of having no competitors. C++ has Qt and Cairo and boost and about five hundred game engines.)
2D matrices with no 3D do not meet the requirements for most complex applications. (At least there are inversions though). Even in 2D applications, usually the world ends up being, de facto, a 3D one (they are "2D with layers", so that the world does have z coordinates). Those applications may only need 2D rendering, but they do use (limited) 3D linear algebra.
I also think there probably should be oriented bounding boxes, even for 2D. "Simple" APIs that only provide axis aligned bounding boxes, which yes, is almost all of them, cause beginner programmers tons of headaches as they have to figure out linear algebra to do such things as "tell me if the closest box around this rotated object intersects the closest box around this other rotated object".
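The "closest box around a rotated object" chore mentioned above looks roughly like this; a sketch under assumed names, not a proposal API:

```cpp
#include <algorithm>
#include <cmath>

struct aabb { float min_x, min_y, max_x, max_y; };

// Axis-aligned box enclosing a w-by-h rectangle centered at (cx, cy)
// and rotated by `angle` radians. The half-extents of the enclosing
// box are w/2*|cos| + h/2*|sin| and w/2*|sin| + h/2*|cos|.
inline aabb bounds_of_rotated_rect(float cx, float cy,
                                   float w, float h, float angle) {
    float c = std::fabs(std::cos(angle)), s = std::fabs(std::sin(angle));
    float hx = 0.5f * (w * c + h * s);
    float hy = 0.5f * (w * s + h * c);
    return {cx - hx, cy - hy, cx + hx, cy + hy};
}

// Two axis-aligned boxes overlap iff they overlap on both axes.
inline bool overlaps(const aabb& a, const aabb& b) {
    return a.min_x <= b.max_x && b.min_x <= a.max_x &&
           a.min_y <= b.max_y && b.min_y <= a.max_y;
}
```
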
The emphasis on vector graphics seems strange. Even on platforms like Android, we haven't seen vector graphics come to the forefront; instead, the approach used for UI graphics scaling is providing raster graphics at different resolutions. Vector graphics are not, in fact, more efficient for the majority of applications, and providing essentially zero read/write capability for common vector formats doesn't help.
It seems to be intended as a teaching API. Undergrad courses don't use C++ as much as they did when, well, I was in undergrad, but those that do teach graphics with OpenGL (not even Cairo or Qt). Why? Well, 2D-only, non-hardware-accelerated graphics are not, in fact, hugely useful or relevant to the vast majority of C++ programmers. Though I am primarily a C++ programmer, I write UIs in C# or Java (yes, I know). The proposed spec, in any case, does not even have a "create window" API, making it a pretty bad basis for a UI toolkit (presumably you still have to obtain the window handle through platform-dependent means). The reason I don't write charts in C++ is not actually the lack of 2D APIs, Qt is accessible enough; it's the lack of good, easy data input facilities compared to languages like Python (or well, JavaScript...). When reading XML or CSV requires a library, no one is going to use your language for charting.
And as a last point that I admit is more of an aesthetic issue: what is up with preceding all data types with "basic_"? They're basic in Qt and HTML5 Canvas too, but neither actually calls it a "basic_circle". This is going to turn autocomplete into a terrible pain, as well as add five characters to every line for pretty much no reason. Just call it circle; we won't mind.
With all these new languages, the intimidated, middle-aged C++ language is having a mid-life crisis and getting tattoos. Knock it off, act your age, and enjoy the ride into the sunset with your dad, C.