Comment moderation: can we trust users after all?

What is this story about? About the fact that someone constantly has to protect us from the extremes of online discussions. About what the organizers of those discussions do to let us communicate in at least a tolerable way, and how they fail to cope with the task. About what we lack in such communication and what new ideas can be offered in the field of moderation.

So, on stage, under the hot light of the spotlights: the moderation of online discussions, or rather, of comments.




What is the problem, and what are we fighting against?


A discussion on a topic that interests you is so cool! An interesting interlocutor on the other side of the continent: isn't that the dream of our ancestors? However, online discussions have a fly in the ointment: someone constantly interferes.

Novice bloggers face this problem when they first plunge into the wonderful online world and open their blog's treasured comment dialogue to the whole world. Internet giants struggle with it too, investing millions of dollars a year in the task and hiring hundreds of thousands of additional moderators during surges of user activity around top news, disasters, pandemics, and so on.

Once the problem was outright spam, and the biggest pests were Internet bots programmed to pose as users and spam the comment dialogues for the benefit of their masters. The organizers have learned, albeit with varying degrees of success, to fight this kind of interference, to discourage and confuse the bots. Thanks to captcha services, automatic filters, neural networks and Akismet, to everything that protects us from obvious spam!

Against a machine, there is another machine.


But it turned out that the organizers still cannot breathe easy. Having protected discussions from the mass mailings of Internet bots, they also need to protect them from people who do not follow the rules of online discussion. And here it turned out that automatic methods alone are not enough, because they are too weak.

Because they are still at the wrong point in their development. Because they still understand the meaning of human dialogue very poorly. Because, just as in the time of Karel Čapek, the best minds of humankind are still struggling to create a strong AI that can really understand us. Because a user who wants to violate the rules can quietly slip past filters, neural networks, and other pseudo-intelligent machines.



So what do we have now?


As a result, organizing a discussion still requires a human moderator. Yes, of course, programs are getting smarter and can take over a lot of the checking. But they also make mistakes, and when they do, the mistakes are telling. So the final decision still rests with the moderator: does this or that comment violate the rules, does it go beyond the discussion, is it an automated fake? And where do we get the right number of moderators for all those dialogues?

Site owners now face a situation where, in some cases, it is better to turn off comments altogether.


It can be easier for a blogger who wants to focus on writing articles (or maybe on something more pleasant) and has neither the ability nor the desire (and why should he?) to be distracted by moderating discussions. It can be simpler and more economically sensible for large Internet companies. YouTube, for example, did this in March 2019: it introduced new rules, after which tens of millions of videos lost the "Leave a Comment" button. And all because advertisers were unhappy with the comments, often of a sexual nature, and refused to advertise in such an environment. So much for automatic moderation: it just can't cope!

But what about the users in the discussion themselves? Can they help?


It would seem so. Most users are interested in a civil discussion. After all, there are more adequate people than inadequate ones, and they could somehow take part in the moderation process, somehow evaluate comments and each other on their own. Very tempting, isn't it? It seems the moderator is then not particularly needed. Just as in ordinary life, where the adequacy of a discussion is in our own hands, and if someone disrupts it, the community simply excludes them. In some cases, even with fists.

On the Internet everything is different, because everyone is equal and there is no trust in users. By default. Giving an ordinary user the ability to directly moderate other people's comments is madness, an Internet hara-kiri for the organizer.

Nevertheless, attempts to hand at least part of the discussion management over to users continue. It is very tempting.


For example, you can let a user call a moderator when there is a violation. Remember? The coveted "report" button. Great! Except that our blogger is a bit lazy and is in no particular hurry to respond. And anyway, maybe you pressed it as a joke?

Or, for example, if many users point to a violation, then perhaps such collective moderation can be trusted and the violating comment hidden automatically. But there are difficulties. What if there are not enough users to organize a collective vote? And there is a nagging suspicion that such a vote can be rigged from different browsers.
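
To make that weak spot concrete, here is a minimal sketch (in Python) of such threshold-based collective moderation. The threshold value, the report_violation function and the idea of keying reports by a browser fingerprint are illustrative assumptions, not anything prescribed above:

    # Naive collective moderation: hide a comment once enough distinct
    # readers have flagged it. Purely illustrative.
    HIDE_THRESHOLD = 5  # assumed number of independent reports required

    reports: dict[int, set[str]] = {}  # comment_id -> fingerprints of reporters

    def report_violation(comment_id: int, reporter_fingerprint: str) -> bool:
        """Register a report; return True if the comment should now be hidden."""
        voters = reports.setdefault(comment_id, set())
        voters.add(reporter_fingerprint)
        # Weak point: a "fingerprint" is easy to fake from different browsers,
        # so this vote can be rigged, exactly as noted above.
        return len(voters) >= HIDE_THRESHOLD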

Or you can try to select the most objective users and give them moderation powers, as is done, for example, on Habr. But that requires a large community with its own history, on the basis of which such objective users can be selected. And that is not always the case.

So, if we are following the zen path of drawing users into keeping the discussion adequate, we inevitably arrive at the question:

How do we learn to trust the user?




It would seem impossible to simply take an unknown user's assessment on trust. But you can create conditions under which that assessment can be trusted! And such a successful experience already exists on the Internet. This method was used by the well-known reCAPTCHA, which everyone is rather tired of by now. More precisely, by one of its first versions, in which the user was asked to type a couple of words to get into a site. One of the words was a control word known to the system, while the other was unknown to the system and needed to be recognized. The users themselves did not know which word was which, and to get in faster they had to answer honestly rather than guess!

As a result, users who answered honestly and entered the control word correctly were considered objective: reCAPTCHA let them through and also accepted their reading of the previously unknown word as the true result.

And it turned out to be effective! In this simple way, reCAPTCHA users in the first six months of operation successfully recognized about 150 million words, the equivalent of 7,500 books that automatic methods had failed to recognize. Not bad, right?
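
The underlying acceptance rule is easy to express in code. Below is a rough sketch under assumed names (check_recaptcha_pair, transcriptions); the real service worked on scanned word images, but the rule "trust the unknown answer only when the control answer is correct" is the point:

    def check_recaptcha_pair(control_word: str,
                             control_answer: str,
                             unknown_word_id: str,
                             unknown_answer: str,
                             transcriptions: dict[str, list[str]]) -> bool:
        """Accept the user's reading of the unknown word only if they typed
        the control word (the one known to the system) correctly."""
        if control_answer.strip().lower() != control_word.lower():
            return False  # control word wrong: discard the whole answer
        # Control word right: trust the reading of the unknown word
        # and store it as one more vote for its true transcription.
        transcriptions.setdefault(unknown_word_id, []).append(
            unknown_answer.strip().lower())
        return True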

But can we apply such a method in the field of moderation?


Why not? Nothing prevents us from asking the user to evaluate several comments against the rules, where one comment's rating is unknown to us and the rest are test comments. Then we accept the rating of the unknown comment if the user was honest in evaluating the test (i.e. known to the system) comments. Moreover, to evaluate the unknown comment this way we do not need to organize a collective vote; a single user who has proved their objectivity is enough!
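
Here is a minimal sketch of how this could look in code (the names ModerationTask, build_task and accept_rating are invented for illustration): the system mixes one unknown comment with several test comments whose verdicts it already knows, shuffles them so the user cannot tell which is which, and accepts the user's verdict on the unknown comment only if every test comment was rated correctly.

    import random
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ModerationTask:
        shown_ids: list[int]             # comment ids shown to the user, shuffled
        known_verdicts: dict[int, bool]  # test comment id -> "violates" verdict
        unknown_id: int                  # the comment whose verdict we want

    def build_task(test_comments: dict[int, bool], unknown_id: int) -> ModerationTask:
        ids = list(test_comments) + [unknown_id]
        random.shuffle(ids)  # the user cannot tell test comments from the unknown one
        return ModerationTask(ids, dict(test_comments), unknown_id)

    def accept_rating(task: ModerationTask, answers: dict[int, bool]) -> Optional[bool]:
        """Return the user's verdict on the unknown comment if every test
        comment was rated correctly; otherwise discard the rating."""
        for cid, expected in task.known_verdicts.items():
            if answers.get(cid) != expected:
                return None
        return answers.get(task.unknown_id)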

And then it gets even more interesting. This seemingly simple method of a single objective assessment, when replicated, can produce a powerful moderation effect: independent user moderation, so to speak.



As the old satirical soldiers' song goes:
It was all written smoothly on paper,
But they forgot about the ravines,
And those we have to march across!
So how will it work in real life?



For clarity, here is an example of post-moderation based on this method:

1) Suppose we have the following discussion. Any reader who encounters a violation can trigger a check by clicking the "violates?" button.



2) He is shown the rules of user moderation and asked to rate a few other comments before the comment he selected is sent for moderation. By clicking the "Rate >>" button, he can proceed to the assessment.



3) Below is an example of one of the 4 ratings (the number may vary) that he will be offered. The answer is simply yes or no.



4) If the user evaluates all the comments correctly (in this example, 3 test comments and one unknown comment that needs checking), he is considered objective and is allowed to send the comment he selected for moderation.



5) If the user was mistaken on the test comments, he is considered biased and will not be able to send the comment for moderation.



The same can be arranged for pre-moderation. In general, the following commenting culture becomes possible on the basis of this method:

  • Users who are not logged in always comment through user pre-moderation.
  • Logged-in users, as long as they do not violate the rules, can comment without pre-moderation.
  • Readers can deal with violations using user post-moderation.
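
As a tiny sketch of that routing rule (the function name and flags are illustrative assumptions, not part of any particular platform):

    def needs_user_premoderation(is_logged_in: bool, has_violations: bool) -> bool:
        """Decide whether a new comment must first pass user pre-moderation."""
        if not is_logged_in:
            return True        # anonymous comments always go through it
        return has_violations  # logged-in users with a clean record skip it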




Just like that, we can bring users into solving the moderation task if we begin to trust them. We can give them the chance to maintain the culture of communication themselves, without burdening them much, and create an environment in which it is simply unprofitable for commenters to violate the rules of discussion.


Thank you! And let the criticism begin!

Addendum: A demo was created based on this article.
You can try it here.
