I agree with your statements regarding reconfiguration and TDM, though I still think GP (and to a lesser extent, your comment) are very focused on traditional computing paradigms. FPGAs are much more promising for real-time systems, particularly those with very large incoming datasets to transform or otherwise process in parallel. Thinking about FPGAs in terms of how 'quickly' they process data is really missing the point IMO.
One common and very good application for FPGAs is in active electronically scanned array (AESA) radar, sonar, and camera image processing. You can perform filtering and transforms in parallel with various frequency and phase settings, which would be impossible for a similarly sized processor.
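To make the "parallel filters with different frequency and phase settings" idea concrete, here's a toy sketch in plain Python (not HDL — just the math). Each entry in the bank correlates the same sample stream against a reference tone with its own frequency and phase; on an FPGA, every channel would be its own pipeline consuming the same sample each clock. The sample rate and channel settings are made up for illustration.

```python
import math

def mix_and_accumulate(samples, freq, phase, fs=1000.0):
    """Correlate the input against one reference tone (one 'channel')."""
    acc = 0.0
    for n, x in enumerate(samples):
        acc += x * math.cos(2 * math.pi * freq * n / fs + phase)
    return acc

# A 50 Hz test tone...
fs = 1000.0
samples = [math.cos(2 * math.pi * 50.0 * n / fs) for n in range(1000)]

# ...pushed through a bank of (frequency, phase) channels.
# On a CPU these run one after another; in fabric they all run at once.
bank = [(25.0, 0.0), (50.0, 0.0), (100.0, 0.0)]
responses = [mix_and_accumulate(samples, f, p, fs) for f, p in bank]
# The 50 Hz channel responds strongly; the others integrate to ~0.
```

On a processor the cost grows linearly with the number of channels; in fabric you pay in area instead, which is exactly the trade the parent comment is pointing at.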
FPGAs have the potential to revolutionize sensor arrays by making them much more useful and affordable.
Yes, agreed. "Traditional computing paradigms" are (IMO) not all that interesting as research topics at this point. As far as I know, most of the work in that space is in branch prediction and cache replacement policies.
FPGAs are what you really want when you need to deal with high-resolution data coming in at very high rates. Often even a very fast general-purpose processor with hand-tuned assembly simply won't have the theoretical memory throughput to process your data without "dropping frames". They also have the benefit of deterministic performance, which you can't guarantee with modern caching/branch-prediction systems (AFAIK — my computer architecture knowledge isn't that cutting-edge).
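A quick back-of-envelope on the throughput point — the numbers below are illustrative, not from any real part, but they show how fast a sensor array saturates a memory bus:

```python
# Hypothetical 64-channel array, each channel a 16-bit ADC at 100 MS/s.
channels = 64
sample_rate = 100e6          # samples/second per channel (assumed)
bytes_per_sample = 2         # 16-bit samples

ingest_rate = channels * sample_rate * bytes_per_sample  # bytes/second
print(f"Ingest rate: {ingest_rate / 1e9:.1f} GB/s")      # 12.8 GB/s

# A CPU has to move every one of those bytes through its memory hierarchy
# (often more than once), so sustained DRAM bandwidth is the hard ceiling.
# An FPGA can filter or decimate each channel right at the pins, in
# parallel, before anything ever touches external memory.
```

12.8 GB/s of raw ingest is already a large fraction of a typical desktop's sustained memory bandwidth before the CPU has done any actual work on the data.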
They also work really well when a computation is so far off the beaten path for general-purpose processors (or so memory-bound) that custom logic wins outright.
There is also some work on sprinkling more hard logic into FPGA dies, like processor or accelerator cores for various applications. FPGAs are great for implementing the glue logic to move data between those.
I think you touched on one of the biggest things about FPGAs in your comment, which is that they are perfect for computation that does not involve branches. If you've got a lot of data, and you're doing transforms, you usually don't need to branch, so being able to crunch everything through in parallel is a massive benefit.
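To illustrate the branch-free point, here's a small sketch (Python standing in for what would be fixed-latency logic in fabric): a saturating 16-bit add written without any data-dependent control flow, the kind of primitive a DSP pipeline replicates once per data lane.

```python
def sat_add16(a, b):
    """Add two signed 16-bit values, clamping to [-32768, 32767],
    using arithmetic selection instead of if/else."""
    s = a + b
    over = int(s > 32767)     # 1 when the sum overflows the positive range
    under = int(s < -32768)   # 1 when it underflows the negative range
    # Exactly one of the three terms is non-zero; no branch taken.
    return over * 32767 + under * (-32768) + (1 - over) * (1 - under) * s

# Applied element-wise across a block: no data-dependent control flow,
# so every lane can run in lockstep, one result per clock.
out = [sat_add16(a, b)
       for a, b in zip([30000, -30000, 100], [10000, -10000, 23])]
# out == [32767, -32768, 123]
```

In hardware the comparisons and multiplexing collapse into a fixed combinational path, which is why this style of code maps onto an FPGA so much better than branchy control flow does.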
Also agree that additional hard logic or peripherals will be a game-changer for FPGAs, though they would make each design more domain-specific. Alternatively, we may see a shift in how the interconnects are done, allowing flexible use of these 'modules'. It's also possible that continual increases in logic element (LE) counts will make more specialized hardware unnecessary. I don't know which way things will go.