The only thing I can recommend offhand is to simplify your geometries a bit for searching if they're complex. Searching along the border of a complicated shape is much harder than searching around a square; the engine has to do far more calculations per candidate.
Yes, I arrived at the same solution ~15 years ago when location-based search got popular. Searching for anything within some distance of a given location, with a proper Earth projection and a circular radius, "did not scale". Ignoring the Earth's projection and pretending a flat Earth with a rectangle search was much, much faster.
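A minimal sketch of that flat-earth rectangle shortcut, assuming a simple equirectangular approximation (the function names and the ~111 km-per-degree constant are mine, not from the original system): filter on a cheap bounding rectangle first, then refine with an exact distance only if needed.

```python
import math

def bounding_box(lat, lon, radius_km):
    """Approximate lat/lon rectangle that contains a circle of radius_km.

    Treats the Earth as flat near the query point: ~111 km per degree of
    latitude, with longitude degrees shrinking by cos(latitude).
    """
    dlat = radius_km / 111.0
    dlon = radius_km / (111.0 * math.cos(math.radians(lat)))
    return (lat - dlat, lat + dlat, lon - dlon, lon + dlon)

def in_box(point, box):
    """Cheap rectangle test: two comparisons per axis, no trigonometry."""
    lat, lon = point
    lat_min, lat_max, lon_min, lon_max = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

box = bounding_box(52.52, 13.40, 10)   # 10 km around central Berlin
print(in_box((52.55, 13.45), box))     # nearby point -> True
print(in_box((48.14, 11.58), box))     # Munich -> False
```

In a database this corresponds to a plain `BETWEEN` on indexed lat/lon columns, which is why it scaled where the exact circle query did not.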
In the end we used this simplified calculation with added "regions": the world was split into 15 km × 15 km squares, so any square more than x apart could never be in the result set. This could perhaps be combined with modern PostgreSQL partitioning and partition elimination in a clever way.
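The grid idea can be sketched roughly like this (helper names, the equirectangular cell mapping, and the cell-key scheme are my assumptions, not the original implementation): every row stores its cell key, and a query only ever touches the handful of cells that could overlap the search radius.

```python
import math

CELL_KM = 15.0          # grid size from the comment above
KM_PER_DEG_LAT = 111.0  # rough km per degree of latitude

def cell_of(lat, lon):
    """Map a point to an integer (row, col) cell of roughly 15 km squares.

    Uses the same crude flat-earth projection as before; cell widths drift
    a little with latitude, which is fine for a coarse pre-filter.
    """
    row = int(lat * KM_PER_DEG_LAT // CELL_KM)
    col = int(lon * KM_PER_DEG_LAT * math.cos(math.radians(lat)) // CELL_KM)
    return (row, col)

def candidate_cells(lat, lon, radius_km):
    """All cells that could contain a point within radius_km.

    Every row stored under any other cell key can be eliminated without
    doing any distance math at all.
    """
    reach = int(radius_km // CELL_KM) + 1
    r0, c0 = cell_of(lat, lon)
    return {(r0 + dr, c0 + dc)
            for dr in range(-reach, reach + 1)
            for dc in range(-reach, reach + 1)}

cells = candidate_cells(52.52, 13.40, 10)
print(len(cells))  # 10 km radius -> a 3x3 neighbourhood, 9 cells
```

Mapped onto modern PostgreSQL, the cell key could be the partition key, so partition pruning skips every 15 km square outside the candidate set before any geometry work happens.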
And even without partitioning, cleverly z-ordering the entries physically in the database (clustering) could eliminate a lot of random I/O.
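Z-ordering is typically done with a Morton code: interleave the bits of the quantized coordinates so that points that are close in space usually end up close in sort order, and therefore close on disk after clustering. A sketch, where the 16-bit quantization scheme is my choice for illustration:

```python
def interleave_bits(x, y):
    """Interleave two 16-bit integers into one 32-bit Morton (z-order) code.

    x's bits land in the even positions, y's bits in the odd positions.
    """
    z = 0
    for i in range(16):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def z_key(lat, lon):
    """Quantize lat/lon to 16 bits each and interleave.

    A hypothetical clustering key: sorting rows by this value keeps
    spatially nearby rows in nearby pages, so a range scan over a small
    area touches far fewer pages than a random physical layout would.
    """
    x = int((lon + 180.0) / 360.0 * 65535)
    y = int((lat + 90.0) / 180.0 * 65535)
    return interleave_bits(x, y)

print(z_key(52.52, 13.40))
print(z_key(52.53, 13.41))  # differs only in low-order bits
```

In PostgreSQL terms, the analogue would be storing such a key in a column and physically reordering the table by it (e.g. with `CLUSTER` on an index over that column), which is exactly the random-I/O reduction the comment is after.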
MBR filtering is a common optimization shortcut, but I have to wonder whether PostGIS doesn't already do that as a first-pass filter anyway. It's an incredibly smart and performant extension.
Sometimes the issue can be tweaked by using an intersection test instead of overlap/contains/contained-by predicates when the semantics allow it.
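A toy illustration of why the choice of predicate matters, using plain axis-aligned rectangles rather than real PostGIS geometries (class and method names are mine): an intersects test needs only the bounding extents to touch, while containment is a strictly stronger condition that must check every edge.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def intersects(self, other):
        # True if the rectangles share at least one point:
        # four comparisons, and any one failure proves disjointness.
        return not (other.xmin > self.xmax or other.xmax < self.xmin or
                    other.ymin > self.ymax or other.ymax < self.ymin)

    def contains(self, other):
        # True only if other lies entirely inside self; every edge of
        # the candidate must pass, so there is no early disjoint exit.
        return (self.xmin <= other.xmin and other.xmax <= self.xmax and
                self.ymin <= other.ymin and other.ymax <= self.ymax)

a = Rect(0, 0, 10, 10)
b = Rect(5, 5, 15, 15)
print(a.intersects(b))  # True: they overlap
print(a.contains(b))    # False: b sticks out of a
```

For real polygons the gap is much larger than for rectangles, which is why relaxing a query from containment to intersection (when the application can tolerate it) can pay off.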