
Bot Sentinel, a service used to flag “troll” accounts and “untrustworthy” Twitter accounts, appears to be flagging right-wing Twitter accounts en masse – including that of President Donald J. Trump.

“We trained Bot Sentinel to identify specific types of trollbot accounts using thousands of accounts and millions of tweets for our machine learning model,” Bot Sentinel said on its website. “The system can correctly identify trollbot accounts with an accuracy of 95%. Unlike other machine learning tools designed to detect ‘bots,’ we are focusing on specific activities deemed inappropriate by Twitter rules.”
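Bot Sentinel has not published its model, features, or training data, but the kind of supervised text classification its description suggests can be sketched generically. The snippet below is a hypothetical illustration of how a tweet classifier and a single “accuracy” figure might be produced with scikit-learn; the toy dataset, labels, and model choice are placeholder assumptions, not Bot Sentinel’s actual pipeline.

```python
# Illustrative sketch only: every dataset, label, and model choice here is a
# placeholder assumption, not Bot Sentinel's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder data: tweet text paired with a hand-applied 0/1 "trollbot" label.
# A real system would need thousands of labeled accounts, as the quote claims.
tweets = [
    "totally normal tweet about the weather",
    "SPAM SPAM follow me follow me #follow",
    "sharing a news article with friends",
    "copy-pasted slogan copy-pasted slogan",
    "asking a genuine question about policy",
    "same reply sent to hundreds of users",
]
labels = [0, 1, 0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.33, random_state=0, stratify=labels
)

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# "Accuracy" is just the share of held-out examples labeled correctly; a 95%
# figure alone says nothing about which kinds of accounts get misclassified.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The key point of the sketch is the last line: an accuracy number depends entirely on what the labeled training accounts looked like in the first place.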

The site’s “About” page continues, explaining that the service’s machine learning uses “Twitter rules as a guide when selecting Twitter accounts to train our model.” In other words, the accounts that are flagged, at least according to Bot Sentinel, are flagged based on Twitter’s terms of service.

Amid that backdrop, consider the following:

Trump’s account is listed as the most “untrustworthy” account on BotSentinel.com: it appears first when one views the site’s list of “untrustworthy” accounts.

Also toward the top of that list are Ali Alexander, a conservative commentator, and Paul Sperry, a Hoover Institution media fellow, author, and journalist.

A bit further down the list (but not far) are Mollie Hemingway of The Federalist and Mark Levin, an author, attorney, and radio show host. Those “untrustworthy” accounts seem to have one thing in common: they all publicly rejected the “Russian collusion” narrative from the beginning.

The others on the list are nearly all right-wing accounts. The further one scrolls, the more “untrustworthy” right-wingers one finds.

This raises an obvious question: If Bot Sentinel truly uses machine learning based on Twitter’s terms of service to tell its users which accounts are “untrustworthy,” does that mean that Twitter’s terms of service are rigged against conservatives?

Neither Bot Sentinel nor Twitter immediately responded to a request for comment.

Bot Sentinel also has a “Trollbot” page, which lists accounts its algorithm deems to be “trollbots” – apparently accounts not run by humans, though the site never defines the term – based on the same machine learning that, again, is said to be built on Twitter’s rules.

“We launched Bot Sentinel to help diminish the effectiveness of trollbot accounts that infest Twitter,” Bot Sentinel’s site said. “We believe Twitter users should be able to engage in healthy online discourse without foreign countries and organized groups manipulating the conversation.”

That description indicates that Bot Sentinel was formed in response to the “Russian collusion” investigation, during which Americans were told that some social media accounts were controlled by Russians to “meddle” in America’s elections. That effort, small as it was, also boosted Sen. Bernie Sanders (I-VT), a presidential candidate in 2016 and again in 2020, Jill Stein of the Green Party, and the Black Lives Matter movement.

Yet Bot Sentinel has flagged mostly pro-Trump accounts as being non-human, or in its words, the product of “foreign countries and organized groups manipulating the conversation.”

The site’s “About” section ends with a disclaimer.

“Most trollbots are not part of a large conspiracy attempting to influence American policies and/or elections,” it says. “However, there are trollbots who engage in deceptive tactics and there is a correlation between troll-like behavior and trollbots who are part of an influence campaign. A trollbot that is actively trying to cause division and discord will behave in a manner consistent with someone who receives a high trollbot score.”

Bot Sentinel itself also has a Twitter account with nearly 12,000 followers. It chronicles accounts that have been suspended or are now inactive.

Based on Bot Sentinel’s description of its own service, either Twitter’s terms of service flag conservative accounts at a much higher rate than liberal ones, or Bot Sentinel isn’t being truthful about how its machine learning works.

The thing is, liberals often cite Bot Sentinel as an authority on which accounts are real and which aren’t. Here are but two of thousands of examples. In the first, a Twitter user warns Rep. Eric Swalwell (D-CA) that he’s arguing with a “trollbot.”

If Bot Sentinel is inaccurate, it is actively propagandizing and affecting political discourse in the very manner it claims to fight. Alternatively, Twitter might very well be flagging conservative accounts for terms of service violations at a high rate.

This story is developing.