AI technologies will add a new, pernicious dimension to sexualised smear campaigns.
By Oliver Ward
Character assassination and smear campaigns have been part and parcel of democratic elections since the birth of democracy. However, in the run-up to the 2019 Indonesian elections, Grace Natalie, head of the Indonesian Solidarity Party (PSI), had to contend with a particularly insidious line of attack.
An anonymous Twitter account, simply named “Hulk”, accused her of having an extra-marital affair with Pak Ahok, the former governor of Jakarta. The accuser claimed to have access to a sex tape, which they threatened to make public.
Grace took to social media, challenging the user to post the video within 24 hours. “If Hulk fails to release the video in 24 hours beginning now, then it will prove that his claims are just fictional.”
The user did not upload a video. Grace was vindicated, and following a wave of public scrutiny, Hulk deleted the account. But what would have happened if the online troll had released a video? What if the video featured Grace's voice, body and mannerisms? Would anyone believe it wasn't her?
Sexuality has long been employed as a weapon against female politicians
Male political candidates are often subjected to malignant smear campaigns and character assassinations. In the 2019 presidential election, Prabowo accused Jokowi of harbouring communist beliefs, questioned his ancestry, and attacked his religious piety.
But for all the dirty tricks male candidates must endure, ASEAN's female candidates face all of these lines of attack, as well as having their sexuality weaponised against them.
During the years Myanmar's leader Aung San Suu Kyi spent under house arrest between 1989 and 2010, the military junta circulated pamphlets referring to her as a "genocidal prostitute". As recently as 2018, a pro-military newspaper columnist referred to the civilian leader as a "power-mad prostitute".
In the Philippines, President Rodrigo Duterte called Senator Leila de Lima an "immoral woman". His allies threatened to release a sex video allegedly in the president's possession after she opened a senate investigation into the extrajudicial killings carried out as part of his drug war.
Sexualised misinformation is a real and potent danger
Sexual threats and harassment are amplified during political campaigning. Comments on online videos featuring female political candidates frequently contain sexually aggressive language. Female candidates are labelled "whores" and threats of rape are common. This has a smothering effect on female participation in democracy.
The cases of both Leila de Lima and Grace Natalie are examples of female sexuality being weaponised by political opponents. In both cases, the videos never surfaced, and it is unlikely that they ever existed. Yet neither woman saw her accusers held to account for spreading sexualised misinformation designed to smear her.
As "deepfake" technology improves, misinformation campaigns will have a new, powerful weapon with which to exploit female sexuality and undermine political opponents. Empowered by a climate of indifference towards sexualised misinformation, what would stop a political campaign from harnessing modern technologies to manufacture fake sex tapes of opponents and derail their bids for office?
Digital technology will facilitate the spread of sexualised misinformation
Deepfake techniques employ artificial intelligence (AI) technology to manipulate images, video, and audio to create fake content. Deepfakes replicate the look and sound of real human speech and movement, allowing users to create realistic videos of people doing and saying things they never did or said.
The use of deepfake technology represents a new threat to democracy; in particular, it poses a heightened threat to female democratic participation. With sexualised misinformation already a prominent and accepted component of political smear campaigns, the rise of deepfake technology has grave implications for future female candidates.
Hulk's silence in the wake of Grace's public challenge revealed the emptiness of the political attack. Her unwavering confidence earned Grace the trust and respect of Indonesian netizens. But if Hulk had responded with a faked video of Grace's face superimposed onto a pornographic actress's body, the onus would have been on Grace to prove the video was not real. Depending on the technology employed, this could have been an impossible task.
Cheaper, more advanced technology is making deepfake technology more accessible
Deepfakes are getting easier to make and harder to spot. Deepfake technology is advancing faster than detection software, making it increasingly difficult to determine when a video or image has been altered.
Even if algorithms could accurately detect when a video has been doctored, many netizens are unaware of the existence of deepfake technology. In these cases, just seeing a deepfake video can drastically alter a viewer’s perception of a political candidate.
In 2014, a doctored image of Aung San Suu Kyi wearing a hijab was uploaded to Facebook and widely shared. At the time, tensions between the country's Buddhist majority and Muslim minority were running high, fuelling anti-Muslim sentiment. The image prompted outrage on social media as Myanmar's netizens instantly accepted it as genuine.
The episode demonstrated that the region is unprepared for the disruption deepfake technology could inflict on ASEAN's young democracies. As women fight for increased political representation, deepfake technology poses another major threat to their candidacies.
Regulation will only go so far
Part of the difficulty in combating deepfakes is the limited reach of regulation. The anonymity granted to netizens online allows a user to create and distribute a lewd deepfake video of a female political candidate without fear of reprisal. Strict libel and privacy laws may deter some actors, but without the ability to identify offenders, their regulatory effect will be limited.
In places like the Philippines where the executive and his allies have used sexualised misinformation as a political weapon, regulation is also unlikely to be forthcoming.
It is also unrealistic to expect tech firms to regulate their platforms. Facebook proved ineffective in detecting and removing misinformation that led to the ethnic cleansing campaign in Myanmar. Even though major sites like Facebook and Twitter do not permit lewd content on their platforms, there will always be other platforms operated by regimes actively disseminating disinformation that will host sexually explicit imagery designed to attack female candidates.
The strongest avenue to counter the spread of sexualised misinformation is through public education and awareness. Educating netizens on the importance of scepticism will keep them vigilant when confronted with deepfake videos and images.
Unfortunately, changing attitudes is not a quick fix. As much of the region’s population is still coming online, persuading new netizens that the digital mine of information at their fingertips cannot always be trusted, even when videos appear to provide irrefutable evidence, will be no small feat.
None of this bodes well for the future of female democratic participation in ASEAN. Deepfakes will add a new, stinging dimension to the sexual harassment and sexualised misinformation campaigns targeting female candidates.