That commoditizes the FPGA market, which is great for consumers but exactly what the manufacturers hate: once switching toolchains is easy, they suddenly have to compete on price.
If Intel thought they could get away with requiring a licensed compiler for their processors, they would. GPUs are in a no-man's-land: somewhat portable shader languages, but the compiler targets are proprietary and rapidly shifting.
It's pretty obvious the manufacturers will try and get away with vendor lock-in for as long as they can.
Nvidia has been playing that game with CUDA from day one, leaving OpenCL barely supported, barely usable, and a complete no-no for anyone who wants to do production work on their hardware, while keeping it around just enough to deflect this very criticism.
The manufacturers are the villain in that story, and they know it.
This makes the kind of project described in the OP all the more important, and the folks who work on it should get help from anyone who uses FPGAs commercially.
I work with FPGAs commercially. Using unsupported tools isn't an option.
If you're building anything 'real', you're not going to risk the program to save a few thousand dollars; you just cost it in.
Yeah, I know, Synplify costs way more than that, and it isn't exactly standard anymore (Xilinx's tools got good enough).
It's really not that expensive, considering the FPGA is usually critical system architecture.
I don't see Nvidia as the 'villain' in that story either: they designed the graphics chips, and they designed CUDA. Nobody is forcing anyone to use either. They don't support the language you like? Use somebody else's chips. They can't support every language; I'm not upset they don't directly support Object Pascal.