I suspect this is the case as well. It's weirdly controversial to suggest that a large number of accounts on any platform are "shill" accounts, despite the fact that there are clear incentives for them to exist and that making one isn't particularly hard. Even today, with social networks being quite large, a few votes in your target direction can majorly swing the ultimate outcome: on reddit, just 5-10 downvotes will get the collective to decide a comment is "bad" and keep downvoting it, and on HN you only need a handful of people to flag a post to make it disappear from the front page (similarly, you only need a few upvotes that HN doesn't detect as a voting ring to reach the home page).
Any puppet account you recognize is one that has already failed. Just as with websites, time is a major factor in detecting this type of account. So if you can get away with posting relatively innocuous content in an automated way for a few years, you can build "trust" on these various platforms.