Have you investigated why? I know that many projects have an "implement first, optimize later" approach, and the lesser-used functions might be far from optimal.
Back in the TensorFlow days, I had this issue and submitted a patch that gave a ~50x speedup for my use case. It's always better to optimize the base function than to have 100 people all manually working around the same performance issue.
Because they use a funny format (BCOO). I'm not mocking it; it must be a solid choice for some reasons, like sparsification or other fancy stuff. But for large matrices, even with batches (i.e. multiplying by a tall dense matrix), it doesn't match an equivalent scatter (x.at[idx].add(vals)), which itself is several times slower than equivalent OpenCL (on an A40).
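For concreteness, here's a minimal sketch of the comparison I mean (sizes and keys are made up, purely illustrative):

    import jax
    import jax.numpy as jnp
    from jax.experimental import sparse

    # Made-up problem sizes: a sparse (n, m) matrix with nnz nonzeros,
    # multiplied by a tall dense (m, k) matrix.
    n, m, k, nnz = 100_000, 100_000, 64, 1_000_000

    keys = jax.random.split(jax.random.PRNGKey(0), 4)
    rows = jax.random.randint(keys[0], (nnz,), 0, n)
    cols = jax.random.randint(keys[1], (nnz,), 0, m)
    vals = jax.random.normal(keys[2], (nnz,))
    dense = jax.random.normal(keys[3], (m, k))

    # BCOO path: build the sparse matrix, then a sparse-dense matmul.
    A = sparse.BCOO((vals, jnp.stack([rows, cols], axis=1)), shape=(n, m))
    out_bcoo = A @ dense

    # Scatter path: gather the relevant dense rows, scale by the nonzero
    # values, and scatter-add into the output. Computes the same product.
    out_scatter = jnp.zeros((n, k)).at[rows].add(vals[:, None] * dense[cols])

Both produce the same result; the scatter version is the one the BCOO matmul doesn't keep up with in my case.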