"The first shader-capable GPUs only supported pixel shading, but vertex shaders were quickly introduced once developers realized the power of shaders"
I actually very much disagree with your point about names. I write shaders when I have to; I'm not very good at it, but the whole thing feels fundamentally broken, much like I picture the days of 16-bit x86 assembly (memory segmentation?).
There are arbitrary length, register, and other limits per DX version; there's no clean cross-platform way to write shaders; and the documentation is fragmented, vague, written for the wrong platform, or nonexistent.
Sure, the naming could be better, but page one of a decent textbook should set you straight. The other issues, not so much. Of course, that's more or less the price you pay for being on the bleeding edge of performance.
Oh! You're right, I was mixed up about the history. Sadly my mixup will probably detract from my other, non-history points as well. Thank you for correcting me.
"I write shaders when I have to; I'm not very good at it, but the whole thing feels fundamentally broken, much like I picture the days of 16-bit x86 assembly (memory segmentation?). There are arbitrary length, register, and other limits per DX version; there's no clean cross-platform way to write shaders; and the documentation is fragmented, vague, written for the wrong platform, or nonexistent."
This is exactly why I push people to try using OpenGL and avoid Direct3D. All of those problems are D3D problems, not shader problems.
GL has no arbitrary length limits, and it has extremely accurate and thorough documentation. If a program is too complex for the given hardware to execute, the driver falls back to executing it correctly in software. Some see that as a terrible thing, and sometimes it is, but in today's mega-GPU world it's increasingly rare to write a shader so complicated that the driver has to emulate it in software. Getting an accurate result seems much better.
The limits are hardware limits. DX version is common shorthand for rough hardware generation. If you develop against the same shader model versions in GLSL, you'll have exactly the same limits.
Each video card has a different limit. If you develop against GLSL, you get the limit of whatever video card you're using, which is substantially different from what DX would have you believe the limit is. GPUs are capable of more than Microsoft would have you believe.
The limits you're referring to are artificial: Microsoft mandates that if a pixel shader has more than N instructions, it isn't allowed to compile, regardless of what the video card is actually capable of doing. It's confusing, and the reason they decided to do it that way was compatibility across a wide variety of hardware.
"The first shader-capable GPUs only supported pixel shading, but vertex shaders were quickly introduced once developers realized the power of shaders"
I actually very much disagree with your point about names. I write shaders when I have to, I'm not very good at it, but it feels fundamentally very broken, much as I picture the days of 16 bit x86 assembly (memory segmentation?).
There are arbitrary length, register etc limits per DX version, there's no clean cross platform method of writing shaders, the documentation is fragmented, vague, for the wrong platform or non existent.
Sure, the naming could be better, but page one of a decent textbook should set you straight. The other issues, not so much. Of course, that's more or less the price you pay for being on the bleeding edge of performance.
http://en.wikipedia.org/wiki/Shader#History