If a text based submission gets above some threshold (points, user
history, replies, ...), parse the links in the submission. If
the submission gets any flags, then stop parsing the links.
Also, rendering text normally (black) rather than grayed-out when a text
based submission gets above some threshold would be beneficial.
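The proposed rule above can be sketched roughly as follows. This is a hypothetical illustration only: the threshold value, the `Submission` fields, and the flag-counting are all assumptions, not the site's actual data model.

```python
from dataclasses import dataclass

POINT_THRESHOLD = 20  # assumed cutoff; the real value would need tuning


@dataclass
class Submission:
    points: int
    good_flags: int  # flags the site's weighting judges credible


def should_parse_links(sub: Submission) -> bool:
    """Parse links (and render text normally) once the score threshold
    is met, but back off as soon as a single credible flag arrives."""
    if sub.good_flags >= 1:
        return False
    return sub.points >= POINT_THRESHOLD
```

The same predicate could gate both behaviors (link parsing and un-graying the text), since they trigger on the same threshold.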
Sure, parsing links on text based submissions will allow manipulative
people to put their comment in a privileged position, but the current
implementation detracts from discussion. For example, it's typical to
see a follow-up reply for the sole purpose of having clickable links,
but due to reply ranking, one usually finds the reply too late. Another
issue is mobile devices, where copying and pasting a plain text URL is
painful.
Obviously, I don't know the stats necessary to grasp whether or not
you're fighting with flag-bots or even users who flag too aggressively,
but I know you have code running to deal with these two issues (you
publicly mentioned how the weighting works eons ago). Since you have the
capacity to roughly determine good flagging from bad, a single good flag
should be enough to reverse the decision to parse links. This should be
enough to stop the "privileged positioning" problem.
As for combating the "link harvesting/spamming" side of the problem, I think
the most you could do is mark the parsed links as "nofollow" in the text
based submissions (as usual). It's not a perfect solution, but it's
still better than nothing, and it's equivalent to how you handle link
based submissions.
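Marking parsed links nofollow could look something like this minimal sketch. The regex is an assumption for illustration; a real implementation would reuse whatever link parser the site already applies to link-based submissions.

```python
import re

# Naive URL pattern for illustration only; stops at whitespace,
# quotes, and angle brackets.
URL_RE = re.compile(r'(https?://[^\s<>"]+)')


def linkify_nofollow(text: str) -> str:
    """Turn plain-text URLs into clickable anchors marked
    rel="nofollow", so the parsed links carry no SEO weight
    for spammers harvesting them."""
    return URL_RE.sub(r'<a href="\1" rel="nofollow">\1</a>', text)
```

Per the `rel="nofollow"` semantics, search engines are asked not to pass ranking credit through such links, which is the same mitigation link-based submissions already get.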