Blocking users would be great. Five days here and I'm already seeing every thread littered with some 13-year-old neo-nazi racist. Obviously it degrades the whole site and drives people off. But even better would be turning each user's troll list into an object users can share with each other.
So the UX for this would be: you click on someone's name, you get info about them, and one option you see is "adopt blocked user list". Clicking it merges their blocked list into yours (excluding yourself, of course, in case they blocked you).
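The merge itself is just a set union that excludes the adopter. A minimal sketch, assuming block lists are plain sets of usernames (the function and field names here are hypothetical, not an actual site API):

```python
def adopt_block_list(my_blocks: set[str], their_blocks: set[str], me: str) -> set[str]:
    """Merge someone else's block list into mine, excluding myself
    (in case they happened to block me)."""
    return my_blocks | (their_blocks - {me})

# alice adopts a list that happens to contain her own name:
merged = adopt_block_list({"troll_a"}, {"troll_b", "alice"}, me="alice")
```

Because it's a union, adopting is idempotent and never removes anyone you already blocked yourself.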
Here are this idea's virtues:
Someone has already done the work of blocking the trolls, so why make everyone repeat it? And if you friend someone, odds are you want to block the same kinds of trolls they do.
You could go one more level meta and offer new users the chance to block the people who most often show up on existing users' troll lists. That way they never have to experience trolls at all, which would raise the perceived level of discourse for new users.
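That "most often on troll lists" list is essentially a frequency count over everyone's block lists with a cutoff. A sketch under the same assumptions as above (the `min_blocks` threshold is an invented parameter for illustration):

```python
from collections import Counter

def community_troll_list(all_block_lists: list[set[str]], min_blocks: int = 3) -> list[str]:
    """Users appearing on at least `min_blocks` distinct block lists,
    most-blocked first. Offered to new users as a starter block list."""
    counts = Counter(user for block_list in all_block_lists for user in block_list)
    return [user for user, n in counts.most_common() if n >= min_blocks]

starter = community_troll_list([{"t1", "t2"}, {"t1"}, {"t1", "t3"}], min_blocks=2)
```

The threshold is exactly where the algorithm experimentation mentioned below would happen: raw counts, counts weighted by the blocker's reputation, and so on.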
Some foreseeable issues:
This is predicated on the idea that there are enough good users to converge on a high-quality consensus about who the trolls are. There's room for algorithm experimentation here.
Brigading a user into being labeled a troll. Suppose all the nazi types hate me, create 1000 fake accounts apiece, and write a script to put me on all their block lists. Am I now a troll?
This seems like a special case of the fake-account problem and everything that follows from it. However fake or duplicate accounts are dealt with, you could additionally maintain a metric of who on the site is clearly a high-quality person and who is clearly a troll, and do some FOAF-type analysis to sniff out brigading behavior.
For example, if I have added someone as a friend, the chances of them also being a troll are pretty much nil, so they shouldn't be on any "new user community troll list", even if they're hated by a lot of nazis and banned by them as trolls.
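That friend-based exemption can be sketched as a whitelist built from trusted users' friend graphs (one hop plus friends-of-friends here; the structure and names are illustrative assumptions, not a real implementation):

```python
def filter_brigaded(candidates: list[str],
                    trusted_users: list[str],
                    friends_of: dict[str, set[str]]) -> list[str]:
    """Drop from a community troll list anyone who is a friend, or a
    friend-of-a-friend, of a known high-quality user: being blocked by
    a thousand sockpuppets shouldn't outweigh one trusted friendship."""
    whitelist: set[str] = set()
    for trusted in trusted_users:
        direct = friends_of.get(trusted, set())
        whitelist |= direct
        for friend in direct:
            whitelist |= friends_of.get(friend, set())
    return [c for c in candidates if c not in whitelist]

friends_of = {"admin": {"bob"}, "bob": {"carol"}}
kept = filter_brigaded(["carol", "troll_x"], ["admin"], friends_of)
```

Here "carol" survives the brigade because she's a friend-of-a-friend of a trusted account, while "troll_x" stays on the list.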
There are lots of things like that where you can get a lot of leverage from known high-quality entities' first-hand knowledge of other people to prevent abuse.
Of course, users can always see their own troll list and prune it. That alone would discourage abuse: since there's an effective counter, the abuse becomes ineffective.
Another feature would be an option to see banned users' names in a thread. The UX there is a "show trolls" button on threads: pushing it re-renders the thread with the trolls' names shown but their comments hidden. Then, next to each such name, you can "show user's comments for thread". That's a little wordy, but the driving use case may clarify it:
If everyone is responding to an intelligent post, but for some reason the post is by someone on your troll list, you could elect to see that presumptive troll's post, decide their presence on your list was an error, and remove them. Here again, you're countering people being brigaded into troll status, and thereby discouraging people from trying it in the first place.
Lots of room for thoughtful algos and novel problem solving, which is fun for devs and an opportunity to make the user's first experience of the site higher quality.