Not really, the Itanium VLIW architecture bet heavily on instruction-level parallelism as opposed to thread-level parallelism. In theory, the Itanium could issue and retire 3 instructions per cycle, making it competitive with x86 even at modest clock speeds.
The main problem was that not many programs could sustain 3 parallel instructions in their critical path, which meant that the compiler would often generate NOPs to fill the empty instruction slots. IIRC the Itanium typically achieved around 40% of its theoretical performance on conventional workloads. The term "NOP density" was coined specifically to study this problem.
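To make the idea concrete, here is a toy sketch (not real IA-64 encoding; the bundle contents below are made up) of what "NOP density" measures: the fraction of issue slots in fixed-width bundles that the compiler had to pad with NOPs because it couldn't find independent instructions.

```python
# Toy model of a 3-wide VLIW machine: the compiler must emit fixed-size
# bundles, padding unfilled issue slots with NOPs.
BUNDLE_WIDTH = 3

def nop_density(bundles):
    """Fraction of issue slots wasted on NOP padding."""
    total_slots = len(bundles) * BUNDLE_WIDTH
    nops = sum(bundle.count("nop") for bundle in bundles)
    return nops / total_slots

# A hypothetical code sequence with limited instruction-level parallelism:
# most bundles only contain one or two independent ops.
bundles = [
    ("add", "load", "nop"),
    ("mul", "nop", "nop"),
    ("add", "store", "cmp"),
    ("branch", "nop", "nop"),
]
print(nop_density(bundles))  # 5 of 12 slots are NOPs
```

A sequence like this wastes over 40% of the machine's issue slots, which lines up with the kind of sustained-vs-peak gap described above.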
There is another interesting observation in [1] that I hadn't realized before: even if the compiler successfully generated 3 instructions per cycle, the processor might then have to fetch 3 memory locations in that instruction cycle. If two of those were already in cache, the bundle would still stall on the third memory fetch. Contrast this with the implicit parallelism of hyperthreading, where the processor can continue executing a different thread when the current thread encounters a memory stall.
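A toy timing model illustrates the contrast (the latency numbers are assumptions for illustration, not real hardware figures): the VLIW bundle retires as a unit and waits for its slowest operand, while an SMT core can spend most of that stall running another thread.

```python
# Assumed latencies in cycles -- illustrative only.
CACHE_HIT = 1
CACHE_MISS = 100

def vliw_bundle_cycles(operand_latencies):
    # The whole bundle retires together: two cache hits still wait
    # for the one cache miss.
    return max(operand_latencies)

def smt_hidden_cycles(operand_latencies, other_thread_work):
    # During the stall, an SMT core can execute up to
    # `other_thread_work` cycles of another thread instead of idling.
    stall = max(operand_latencies)
    return min(stall, other_thread_work)

bundle = [CACHE_HIT, CACHE_HIT, CACHE_MISS]
print(vliw_bundle_cycles(bundle))       # 100: the bundle waits on the miss
print(smt_hidden_cycles(bundle, 80))    # 80 of those cycles do useful work
```

The point isn't the exact numbers, just that the VLIW design exposes the worst-case operand latency directly, whereas SMT gives the core something else to do.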
If Intel had decided to produce only Itaniums, without an AMD around to come up with the idea of creating AMD64, we would have had no option but to live with those shortcomings and eventually get improved designs.
If the choice had been Itanium or bust, the outcome would have been much different.