Nick Pickles, public policy director for Twitter, speaks during a full committee hearing on “Mass Violence, Extremism, and Digital Responsibility” on September 18, 2019 in Washington, DC.
Olivier Douliery | AFP | Getty Images
Autocrats who direct violence against their own citizens are welcome on Twitter as long as they follow the company’s rules, a Twitter executive told lawmakers at a hearing Wednesday.
“Would Twitter allow [Russian President Vladimir] Putin to have an account or [Chinese leader] Xi Jinping to have an account?” asked Sen. Dan Sullivan, R-Alaska, at a Senate Commerce Committee hearing about the responsibility of social media companies to monitor and remove propaganda and depictions of violence from their platforms.
“If they were acting within our rules,” responded Twitter’s director of public policy Nick Pickles.
Pickles testified alongside representatives from Facebook, Google and the Anti-Defamation League. The exchange with Sullivan followed questioning over Twitter’s decision to keep Venezuelan President Nicolas Maduro’s account online as the country experiences a humanitarian crisis after a controversial election.
Pickles confirmed to Sullivan that “the rules that apply to any user on Twitter are the same.” Any user who encouraged violence on the platform would see action taken on their account — but only if that incitement of violence happened on Twitter.
“So if a government takes violence against its own citizens, is that breaking the Twitter rules?” Sullivan asked.
Pickles said, “I think that activity is happening offline and the key question for us is what’s happening on Twitter.”
Twitter did not immediately respond to a request for comment.
Pickles said Twitter has taken action against government-directed propaganda campaigns, including one it found to be directed by the government of Venezuela. Twitter released an archive of accounts it believes engaged in that campaign.
At the hearing, lawmakers also asked about Section 230 of the Communications Decency Act, a controversial provision that shields social media companies from liability for their users’ content. Representatives from Twitter, Google and Facebook all fiercely defended the law, saying it helps them monitor dangerous content and encourages competition in the sector by protecting smaller players with fewer content moderation resources.
At one point in the hearing, a lawmaker wondered if there was enough at stake for tech companies to care about content moderation. Lawmakers often call Section 230 a legal shield that prevents tech companies from having to take responsibility for harmful content on their platforms. Facebook’s global policy head Monika Bickert later emphasized that “the incentives are there to make sure we are doing our part.”