Here’s how some of the biggest social platforms are updating their policies ahead of the US midterm elections on 8 November.
The United States is headed for its midterm elections in less than three months, and social media giants such as Facebook, Twitter and TikTok are gearing up for the drama.
With much of the public discussion around elections the world over now happening online, social media platforms have a unique responsibility among tech companies to create a level playing field and ensure there is no foul play.
However, they have also attracted controversy and criticism for alleged policy failures in the run-up to past elections, such as allowing misinformation, misleading ad campaigns and external interference on their platforms, and for the resulting influence on voting patterns.
As US voters head to the polls on 8 November for the country's first major election in two years, here are some of the ways in which social media companies are bracing themselves, or not.
Meta
Undoubtedly the company that attracts the most attention ahead of elections, Facebook parent Meta is taking a stance this year that can best be described as a throwback to the last election in 2020.
The company will focus on cracking down on misinformation around voting logistics and on disallowing new political ads in the week leading up to Election Day.
Nick Clegg, Meta’s president of global affairs, said that the company aims to be “consistent with the policies and safeguards we had in place during the 2020 US presidential election”, when Facebook first decided to ban political ads temporarily before elections.
“This includes advanced security operations to fight foreign interference and domestic influence campaigns, our network of independent fact-checking partners, our industry-leading transparency measures around political advertising and pages, as well as new measures to help keep poll workers safe.”
The company has "hundreds of people across more than 40 teams" working to prepare its social apps Facebook and Instagram ahead of the midterms. Clegg said Meta spent around $5bn globally on safety and security last year alone.
Meta will also remove any posts that mislead people on where, when and how to vote, or those that call for violence based on the voting or election outcome – as was the case in the aftermath of Donald Trump’s loss in the last election.
Clegg said the company is working with 10 external fact-checking partners, including five Spanish-language organisations, to review posts and label them if they are misleading.
TikTok
A relative newcomer to US elections compared with Facebook and Twitter, TikTok saw its future in the US temporarily come under threat after then US president Donald Trump signed a now-revoked executive order against the app in 2020 under the pretext of national security.
This time around, TikTok has a dedicated in-app Elections Centre that went live yesterday (17 August) to give the platform’s approximately 80m users in the US access to state-specific information on the elections, such as how to register and vote as well as the location of polling stations.
Much like Meta, TikTok is focusing significant attention on curbing the spread of misinformation on the platform, building on lessons from 2020. This includes misinformation around how to vote, harassment of election workers, deepfakes of political candidates and incitement to violence.
Unlike Meta, TikTok will ban all political ads on its platform, including both paid ads and sponsored or branded content from creators. Meta not only allows political ads on its platforms, it also controversially does not fact-check them.
Using a mix of automated technology and human intervention, TikTok will also filter videos to ensure conspiracy theories are not amplified ahead of the elections.
Twitter
Arguably the most politically focused of all the major social media platforms, Twitter announced last week that it will apply its civic integrity policy, introduced in 2018, to label or remove tweets with misleading content that could have a harmful effect on the elections.
According to the policy, misleading content includes claims about how to vote, content intended to intimidate or dissuade people from voting, and claims intended to undermine public confidence in an election – including false information about the outcome of the election.
“Tweets with this content may be labelled with links to credible information or helpful context, and Twitter will not recommend or amplify this content in areas of the product where Twitter makes recommendations,” the company wrote in a statement.
Even as it battles Elon Musk over his takeover of the company, Twitter said it will keep an eye out for "fake accounts that misrepresent affiliation to a candidate or elected official", which are prohibited under its existing policies. It will also work to ensure the safety of election workers online.
Twitter's election prep also includes a dedicated Explore tab featuring national news in both English and Spanish from 'reputable' news sources, localised news by state, and voter education public service announcements drawing on information from non-partisan organisations.
“Twitter plays a critical role in empowering democratic conversations, facilitating meaningful political debate, and providing information on civic participation – not only in the US, but around the world,” the company said.