In my undergrad ecology class, we were assigned a scent-based lab experiment in which every member of the class, male and female, wore a plain white t-shirt for 48 hours in order to accumulate body odor; heavy workouts, perfume, showering, etc. were all prohibited. At the end, everyone in the class sniffed all of the t-shirts and guessed which sex had worn each one.
Most people guessed at roughly chance (50-50), and a few did slightly better. I did far worse than chance, getting only 10% right. In a way, anti-forecasting is a type of forecasting, too: if someone had asked for my guess and then picked the opposite, they'd have had superior guessing power. This paradigm falls apart quickly when the odds of events aren't so clear-cut, however.
This comes up regularly in research on crowdsourcing. The article at [0] shows how their approach deals with anti-forecasters ("adversarial answers") by doing exactly what you describe: inverting their answers and treating them as good forecasters.
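A minimal sketch of that idea, not the exact method from [0]: assume you have each forecaster's historical accuracy on binary questions, flip the predictions of anyone whose track record sits well below 50%, then pool with a simple mean. The names, the `flip_threshold` parameter, and the example numbers here are all hypothetical.

```python
from statistics import mean

def calibrated_prediction(p, accuracy, flip_threshold=0.4):
    """Return a forecaster's probability of 'yes', inverted if their
    track record suggests they are an anti-forecaster."""
    return 1.0 - p if accuracy < flip_threshold else p

def pool_forecasts(forecasts, track_records):
    """forecasts: {name: probability of 'yes' on a binary question}
    track_records: {name: historical fraction of correct binary calls}
    Unknown forecasters default to 0.5 (no evidence either way)."""
    adjusted = [
        calibrated_prediction(p, track_records.get(name, 0.5))
        for name, p in forecasts.items()
    ]
    return mean(adjusted)

# Hypothetical example: 'carol' is consistently wrong, so her 0.2 is
# flipped to 0.8 and strengthens the pooled forecast instead of diluting it.
forecasts = {"alice": 0.7, "bob": 0.65, "carol": 0.2}
track_records = {"alice": 0.62, "bob": 0.58, "carol": 0.15}
print(pool_forecasts(forecasts, track_records))  # ~0.72
```

As the earlier comment notes, this only works cleanly on binary questions where "the opposite" of an answer is well defined.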
That's an excellent question I'd like to know the answer to. I've personally known plenty of people with strong ideological biases who consistently predicted the outcomes of political events incorrectly.
e.g. "if <insert presidential candidate> is elected, that's gonna be the end of America," followed by similarly failed predictions over the next four years once that candidate was elected.