Troll-Thwarting API

A new tool is available to check the persistent harassment inflicted by online trolls. Google’s Jigsaw think tank last week launched Perspective, an early-stage technology that uses machine learning to help neutralize trolls.

Perspective reviews comments and scores them based on their similarity to comments people have labeled as toxic, that is, comments likely to make someone leave a conversation.
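For concreteness, here is a minimal sketch of how a publisher might request a toxicity score for a single comment. The endpoint and field names follow Google's published Comment Analyzer documentation for Perspective; the API key is a placeholder, and the helper function name is ours.

```python
import requests

# Endpoint and request shape follow Google's published documentation for the
# Perspective (Comment Analyzer) API; the key below is a placeholder.
API_KEY = "YOUR_API_KEY"
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

def toxicity_score(comment_text: str) -> float:
    """Ask Perspective to score a single comment for toxicity (0.0 to 1.0)."""
    body = {
        "comment": {"text": comment_text},
        "languages": ["en"],  # English is the only language supported at launch
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    data = response.json()
    # summaryScore.value is the model's estimate that the comment is toxic
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))
```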

Publishers can decide what to do with the scores Perspective returns to them. Their options include the following:

  • Flagging comments for their own moderators to review;
  • Providing tools to help users understand the potential toxicity of comments as they write them; and
  • Letting readers sort comments based on their likely toxicity (sketched after this list).
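As for that third option, the sketch below assumes each comment has already been scored with the toxicity_score helper above; the data structure and the cutoff value are illustrative, not part of the API.

```python
from typing import List, Tuple

# Each entry pairs a comment with the toxicity score returned by Perspective.
Comment = Tuple[str, float]

def sort_for_readers(comments: List[Comment], hide_above: float = 0.9) -> List[Comment]:
    """Order a thread from least to most likely toxic, dropping the worst offenders.

    `hide_above` is an illustrative cutoff a publisher might expose as a slider.
    """
    visible = [c for c in comments if c[1] <= hide_above]
    return sorted(visible, key=lambda c: c[1])

thread = [
    ("Thanks for the thoughtful piece.", 0.02),
    ("You people are idiots.", 0.96),
    ("I disagree, but here's why...", 0.11),
]
print(sort_for_readers(thread))
# [('Thanks for the thoughtful piece.', 0.02), ("I disagree, but here's why...", 0.11)]
```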

Forty-seven percent of 3,000 Americans aged 15 or older reported experiencing online harassment or abuse, according to a survey Data & Society conducted last year. More than 70 percent said they had witnessed online harassment or abuse.

Perspective got its training through an examination of hundreds of thousands of comments labeled by human reviewers who were asked to rate online comments on a scale from “very toxic” to “very healthy.”
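Jigsaw has not spelled out how those human labels become training targets. As a rough illustration only, ratings on a scale from "very toxic" to "very healthy" might be mapped to numbers and averaged into a per-comment score along these lines; the label set and weights here are assumptions, not Jigsaw's actual scheme.

```python
from typing import List

# Illustrative only: map rater labels to numbers and average them into a
# single 0-1 "toxicity" target per comment.
LABEL_TO_SCORE = {
    "very healthy": 0.0,
    "healthy": 0.25,
    "neutral": 0.5,
    "toxic": 0.75,
    "very toxic": 1.0,
}

def training_target(rater_labels: List[str]) -> float:
    """Average several raters' labels for one comment into a 0-1 target."""
    scores = [LABEL_TO_SCORE[label] for label in rater_labels]
    return sum(scores) / len(scores)

print(training_target(["toxic", "very toxic", "neutral"]))  # 0.75
```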

Like all machine learning applications, Perspective improves as it’s used.

Partners and Future Plans

A number of partners have signed on to work with Jigsaw in this endeavor:

  • The Wikimedia Foundation is researching ways to detect personal attacks against volunteer editors on Wikipedia;
  • The New York Times is building an open-source moderation tool to expand community discussion;
  • The Economist is reworking its comments platform; and
  • The Guardian is researching how best to moderate comment forums, and host online discussions between readers and journalists.

Jigsaw has been testing a version of this technology with The New York Times, which has a team sifting through and moderating 11,000 comments daily before they are posted.

Jigsaw is working to train models that let moderators sort through comments more quickly.
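Neither Jigsaw nor The Times has detailed how that sorting works. One plausible sketch, assuming the same Perspective scores as above, is to bucket the incoming queue so that clearly healthy comments can be fast-tracked while borderline and likely-toxic ones reach the human team first; the thresholds here are illustrative.

```python
from typing import Dict, List, Tuple

Comment = Tuple[str, float]  # (text, Perspective toxicity score)

def triage(queue: List[Comment],
           auto_ok: float = 0.2,
           likely_toxic: float = 0.8) -> Dict[str, List[Comment]]:
    """Bucket a day's comment queue by score so moderators see the riskiest items first.

    The thresholds are illustrative; a real newsroom would tune them against
    its own moderation decisions.
    """
    buckets: Dict[str, List[Comment]] = {
        "fast_track": [], "needs_review": [], "likely_toxic": []
    }
    for comment in queue:
        score = comment[1]
        if score < auto_ok:
            buckets["fast_track"].append(comment)
        elif score >= likely_toxic:
            buckets["likely_toxic"].append(comment)
        else:
            buckets["needs_review"].append(comment)
    # Within each bucket, surface the highest-scoring comments first.
    for bucket in buckets.values():
        bucket.sort(key=lambda c: c[1], reverse=True)
    return buckets

day_queue = [
    ("Great reporting.", 0.03),
    ("Go away, moron.", 0.92),
    ("Source for this claim?", 0.40),
]
print(triage(day_queue)["needs_review"])  # [('Source for this claim?', 0.4)]
```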

The company is looking for more partners. It wants to deliver models that work in languages other than English, as well as models that can identify other characteristics, such as when comments are unsubstantial or off-topic.