There's a strong incentive to get a high citation count for your papers. This encourages behavior like the manipulations we're seeing here, on the part of fraudsters. But it also fails to incentivize caution on the part of researchers who cite existing works. If there were a "bamboozled count" that showed how many times a researcher cited a work that was later retracted, that would incentivize people to be a little more cautious, and perhaps avoid citing work of people who are suspected to be fudging the numbers.
While we're at it, let's also add a "rickrolled counter" for people who open a link and close it in less than 10 seconds. That should incentivize people to be a little more cautious, and perhaps avoid clicking on links without first double-checking the URL.
Making the system more elaborate would also make it more complex, and people would just find more complex ways to game it. It might even make things worse.
I am sure most citations are made by people who never suspected the numbers were fudged.
If anything, the way to go is to stop using flawed metrics, not to pile on more of them. Most people want to do a good job and build a good reputation. Incentives just distort this.