Why Singapore and why now? The story behind Facebook’s new Election Monitoring Centre

With elections set to take place around ASEAN this year, Facebook is arming itself for a war on misinformation and fake news. Singapore will become Facebook’s frontline in this upcoming crusade.

By Preetam Kaushik

In a recent blog post, Facebook outlined a decision to set up two new regional centres focused exclusively on monitoring content related to upcoming regional elections. Singapore was one of the chosen cities, along with Dublin, the Irish capital.

Facebook beefing up its cyber army to monitor election-related activity should come as no surprise. The company has been stung badly for its lethargy in dealing with fake news and targeted ads during elections, most notoriously during the 2016 US presidential election.

Governments across the world have increased scrutiny of Facebook over its handling of key issues like user privacy and fake news. Having had its fingers burnt, the company has been quick to take proactive steps to avoid further controversies in the upcoming 2019 elections across the world.

Strengthening its election monitoring teams is just one part of this new strategy. Facebook has already unveiled several other internal changes to its organisation and its regulations on political messaging, including new transparency tools and a public database of all political ads.

2019 is a major election year in both Asia & Europe

A quick look at the global election map for 2019 is enough to see why Facebook is focusing more on Europe and Asia this year. Both continents will see key national elections in major nations.

The ASEAN region will host several important national elections this year, with Thailand, the Philippines, and Indonesia all set to go to the polls. Having a monitoring centre in Singapore is crucial in Facebook’s plans to help keep its platform free from election meddling in the region.

Indonesia is among the countries heading to the polls in 2019.
Photo Credit: Department of Foreign Affairs and Trade/Flickr

Southeast Asia has nearly 400 million user accounts. The spectre of thousands of fake accounts spewing misinformation is all too real here. Hate speech and fake news have incited numerous instances of violence and ethnic tension in the region. Facebook needs dedicated monitoring teams to identify the source accounts to have any chance of success at combating fake news and inflammatory hate speech, as underscored by recent events in Indonesia.

Facebook has already expanded its offices in these cities

These announcements throw a different light on Facebook's recent plans to quadruple the size of its international HQ in Dublin and hire 5,000 additional staff.

A similar story played out in Singapore as well, with the company opening a new, bigger office at the swanky Marina One complex. The new facility will allow the company to triple the size of its workforce in the city. Facebook has also invested close to S$1.4 billion (USD 1 billion) in a massive data centre in the city. The 2019 elections were undoubtedly among the factors that influenced the expansion plans in both cities last year.

With these moves, Facebook has signalled its determination to combat the menace of fake news and misinformation during election cycles around the world. But will it be enough?

Why Facebook will need to do more

The sheer scale of the problem is the biggest challenge. In the Asia-Pacific region alone, Facebook has nearly a billion active user accounts to keep an eye on. Sophisticated algorithms and AI may be able to crunch through big data, but challenges remain.

Linguistic and cultural divides are a big hurdle here. Southeast Asia is highly diverse, with numerous languages and local cultures spread across nearly a dozen nations. Identifying malicious content in local languages and dialects is time-consuming. Facebook is already working with local third-party fact-checking teams to overcome this.

Finding the source of fake news does not stop the threat altogether in an era where sharing content is the norm. Political parties and candidates across the world hire armies of online trolls and bots to spread misinformation on social media. Fighting this threat is like trying to kill a Hydra: when you cut off one head, it simply sprouts a few more elsewhere.

To make matters even more complicated, social media platforms like Facebook also have to contend with governments using the same misinformation tactics, often in the name of state propaganda. 

The latest measures show that Facebook is taking its responsibilities seriously. It is a welcome step in the right direction. But the company will need to invest more money and resources in localisation efforts to keep up, especially in the diverse Asia-Pacific region.

Increased funding for AI research alone will not help fight fake news, as recent experience has shown. AI can identify suspicious behaviour but is not able to verify the content itself. It cannot tell the difference between an inflammatory article designed to spread anger and a satirical article designed to amuse. For that, there is currently no substitute for boots on the ground in Facebook's war room.

Even then, tech companies need allies in this fight. After all, technology is only a part of the equation. The users of these technologies, namely governments, citizens, and other organisations, all have a role to play. They all have to come together to tackle the issue with any chance of success.

About the Author

Preetam Kaushik
Preetam Kaushik is a Mumbai-based journalist covering business, tech and economy. A former freelance Mumbai correspondent for Business Insider India and freelance journalist for TheStreet.com, his work has been published by The Times of India, The Huffington Post, Economic Times, WIRED, and World Economic Forum.