> arguably the weirdest mitigation strategy that I've ever seen for a problem like this -- to literally send the URL into another opaque system that just spits out "safe" or "dangerous" with no indication to the user why that is.
I can imagine the AI deciding that the top-left-most bit of `d` is sharp and pointy and therefore not safe, while `a` has no significant protrusion and is therefore safe.