
Grants talk:IdeaLab/Controversy Monitoring Engine

From Meta, a Wikimedia project coordination wiki
This is an archived version of this page, as edited by FaFlo (talk | contribs) at 19:43, 11 April 2015 (→‎pointers and comments). It may differ significantly from the current version.


More, please.

Hi Radfordj, I love this idea, and I'd love to know more! How will you determine "words"? Also, you may want to take a look at https://meta.wikimedia.org/wiki/Grants:IdeaLab/Gender-gap_admin_training and consider how you might collaborate/support. --Mssemantics (talk) 21:01, 18 March 2015 (UTC)

@Mssemantics: I think the words would probably come from an initial hand-curated list of the typical four-letter variety: the b-word, c-word, n-word, f-words, etc. (I don't say them here so this post never gets flagged :). But once we're able to identify posts that contain intimidating language, we'd probably move to an inductive list of words weighted by the number of times they appear in intimidating posts. See these word clouds (language warning!) for nice examples of the words that occur around intimidating language.
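Just to make that concrete, here's a very rough Python sketch of the two passes. The seed words are censored placeholders and the post format is made up, so read it as an illustration rather than the actual implementation:

 # rough sketch -- seed list and data shape are placeholders for illustration
 from collections import Counter

 SEED_WORDS = {"<b-word>", "<c-word>", "<n-word>", "<f-word>"}  # hand-curated list, censored here

 def is_intimidating(post_text):
     """First pass: flag a post if it contains any seed word."""
     return any(w in SEED_WORDS for w in post_text.lower().split())

 def induce_weights(posts):
     """Second pass: weight words by how often they appear in flagged posts."""
     counts = Counter()
     for text in posts:
         if is_intimidating(text):
             counts.update(text.lower().split())
     return counts  # higher count = stronger association with intimidating language

The second pass would produce the kind of weighted list that those word clouds visualize.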
As for admin training (which I love, by the way), I think this project can fold into it in two ways. The first is by wrapping in some kind of first-response training. I don't know what admins will get into with this monitoring. The monitor might pick up flame wars that spread out over weeks, or it might pick up wars that emerge and die out in a matter of minutes. I don't know. Maybe using this engine would help the training get admins thinking about how gap-conscious interventions can happen in a live-controversy context. The other connection could be in training admins on how the controversy monitor works. There's an underlying theory of controversy that the algorithm will be built on, which admins could contribute to, and admins might also learn more about controversies from how the algorithm works. I'm sure there are other synergies too. Did you have some in mind? --Radfordj (talk) 12:32, 19 March 2015 (UTC)

Eligibility confirmed, Inspire Campaign

This Inspire Grant proposal is under review!

We've confirmed your proposal is eligible for the Inspire Campaign review. Please feel free to ask questions and make changes to this proposal as discussions continue during this community comments period.

The committee's formal review begins on 6 April 2015, and grants will be announced at the end of April. See the schedule for more details.

Questions? Contact us at grants(at)wikimedia.org.

pointers and comments

hi there, I really like the idea; I would just like to learn more about certain parts of it (and I have some pointers for you).

some comments:

  • regarding visualizations, you could take a look at: two distinct approaches to showing conflict. Maybe they help or are even reusable.

  • regarding your estimates, I would think that 40 h is too little time to develop and test a robust method for identifying harmful controversy (as there is of course also good controversy, apart from simple vandalism fighting). In particular, how you actually want to test this does not become quite clear in the proposal, and I think it would be good to expand on that. That is an important point if you want the tool to be widely used later on, because a) it can be quite subjective what intimidating behavior might be, and b) low precision or recall might render the tool not effective enough for actual use by Wikipedians (especially precision, i.e. raising false alarms; see the small evaluation sketch after this list).
  • another point relating to the last one: just showing how controversial an article is might not directly relate to how intimidating some editors are towards, e.g., women and newcomers. For a controversy to "heat up" to a degree where it produces a high conflict score, it takes at least two parties going back and forth at each other. Especially newcomers (and maybe women too) might be less inclined to even go into an escalation like this, so the big "wars" would actually not be the ones you would want to detect, but rather instances where a user tried a couple of times to change something and was unilaterally reverted (without fighting back very much themselves). Or maybe this is already included in your concept (I was not sure)? A rough sketch of what I mean is below, after this list.
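For the evaluation point: even a small hand-labeled sample would let you report precision and recall. A minimal Python sketch (the flag and label lists are of course hypothetical inputs, not something your engine produces yet):

 # sketch: compare the tool's flags against a small hand-labeled sample
 def precision_recall(flags, labels):
     """flags/labels: parallel lists of booleans, True = intimidating."""
     tp = sum(1 for f, l in zip(flags, labels) if f and l)
     fp = sum(1 for f, l in zip(flags, labels) if f and not l)
     fn = sum(1 for f, l in zip(flags, labels) if not f and l)
     precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of alarms that were real
     recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of real cases that were caught
     return precision, recall

Low precision here is exactly the false-alarm problem I mean.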
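And to illustrate the one-sided-revert idea, a heuristic along these lines might be closer to what you want to detect than a plain conflict score (the revision format is just my assumption, not how your engine would actually store data):

 # sketch: find editors who tried a few times but were reverted without reverting back
 from collections import Counter

 def one_sided_reverts(revisions, min_attempts=2):
     """revisions: list of dicts like {"editor": ..., "reverted_editor": ... or None}."""
     reverted = Counter()   # how often each editor was reverted
     reverting = Counter()  # how often each editor reverted someone else
     for rev in revisions:
         target = rev.get("reverted_editor")
         if target:
             reverted[target] += 1
             reverting[rev["editor"]] += 1
     # editors who were reverted repeatedly but (almost) never fought back
     return [e for e, n in reverted.items() if n >= min_attempts and reverting[e] == 0]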

just some thoughts.

--Fabian Flöck (talk) 19:43, 11 April 2015 (UTC)