They also say, "Because of the relatively small sample size, we did not make corrections for multiple comparisons."
I don't think that means this should be dismissed entirely; you accomplish what you can with the data available to you. It's extremely hard to conduct studies of specialized populations, and if we waited for perfect conditions nothing would get done. We should just be careful not to draw any strong conclusions from this study alone.
> They also say, "Because of the relatively small sample size, we did not make corrections for multiple comparisons."
Do you think this is somehow a point in their defense? Corrections for multiple comparisons make your results look less significant; most likely they found that, after correcting, their p-values rose above 0.05 and they wouldn't be able to publish the paper, so they dropped the correction. (Alternatively, they were lazy and skipped it.)
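To make the mechanics concrete, here's a minimal sketch of why a correction can flip a result from "significant" to "not significant". The p-values and the number of comparisons are hypothetical (the paper's actual values aren't quoted here), and Bonferroni is just one common correction method:

```python
# Hypothetical p-values from five comparisons -- NOT the paper's actual numbers.
p_values = [0.03, 0.04, 0.20, 0.45, 0.60]
alpha = 0.05

# Bonferroni correction: each raw p-value must beat alpha divided by the
# number of tests, so the per-test threshold shrinks as comparisons grow.
threshold = alpha / len(p_values)  # 0.01 with five tests

for p in p_values:
    print(f"p={p:.2f}: uncorrected significant={p < alpha}, "
          f"after Bonferroni={p < threshold}")

# p=0.03 and p=0.04 clear the naive 0.05 cutoff but fail the corrected
# 0.01 cutoff -- exactly the kind of result that stops being publishable
# once you correct for multiple comparisons.
```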
I was just adding to the reasons we should be skeptical. I'm assuming this was published because it's hard to find data like this, not because the results were spectacular.
https://onlinelibrary.wiley.com/doi/full/10.1002/aur.3012
I'll save you the read; in light of the replication crisis, this is the important bit: 33 autistic participants.