A recent report highlighted how hackers could manipulate self-driving cars into performing dangerous manoeuvres. Does AI have a safety problem? Can the natural world offer any solutions?
Earlier this month, Tencent Keen Security Lab exposed flaws in Tesla’s self-driving cars. The scariest part? Its team forced the cars to make dangerous manoeuvres simply by placing stickers on the road. The stickers misled the cars’ lane detection, causing them to change lanes unnecessarily and cross into the opposite lane, which, in a real-world scenario, would carry oncoming traffic.
Tesla CEO Elon Musk once said: “Mark my words, AI is far more dangerous than nukes.” His comments, like Tencent Keen’s tests, highlight that although the artificial intelligence (AI) field has made huge progress, there are still enormous challenges to overcome.
AI struggles to handle unique situations

Singapore wants self-driving buses on its roads by 2022, and its government has invested billions of dollars in research and development. Grab wants its taxis in the region to be driverless before then.
However, there are doubts about the safety of these vehicles. Critics point to the cars’ inability to recognise and safely handle unique situations. No two car journeys are ever the same. Can machine learning prepare a computer to react the way a human would?
Although the tests highlighted flaws in self-driving cars, Tesla claimed these were not serious issues. The company argued that a driver would quickly override the AI in the situations presented. Up to a point, the claim is valid. However, if the goal of autonomous vehicles is to let people travel without sitting in the driving seat, then any such flaw is dangerous.
If a car is designed to be completely autonomous, the humans travelling in it will not anticipate taking control. Believing the car is in charge, they disengage from driving, and their reaction times are slower when they do have to intervene.
“Advancements in AI and machine learning may find solutions to solve the issue of collisions that occur due to a technical error, but they are still not able to resolve the technology’s vulnerability to hacking,” Margherita Pagani, director of the AIM Research Centre on Artificial Intelligence in Value Creation and co-director of the MSc in Digital Marketing and Data Science at Emlyon Business School, told ASEAN Today. “This may have catastrophic implications in case terrorist hackers access the databases of some self-driving cars.”
In ASEAN, AI development has pressed forward at pace
If AI does run into trouble, then Southeast Asia has a lot to lose. The industry was worth US$450 million in Asia-Pacific in 2017. “Singapore has made the greatest advances in the implementation of AI in sectors such as transportation, financial services, healthcare and media but there are also promising early signs in Malaysia and Vietnam,” Pagani detailed.
In Indonesia, 24.6% of business organisations have adopted some form of AI, followed by Thailand (17.1%), Singapore (9.9%) and Malaysia (8.1%). Adoption is predicted to keep rising. “We expect investments in AI to continue to rise, as more organisations begin to understand the benefits of embedding AI into their business and how data and analytics can help uncover new insights,” said Chwee Kan Chua, global research director at IDC Asia Pacific.

Hacking will continue, but scientists are fighting back
As developers build AI systems to make our lives easier and safer, hackers are trying to exploit them. Some have manipulated AI systems used in healthcare; others have tricked AI into giving up credit card details.
Anything relying on computers or information technology is at risk. It does not matter whether a car is automated or not – it is a target. “As AI systems learn and adapt based on the data they receive, they are susceptible to new types of attacks such as adversarial learning that occurs when an adversary manipulates a feature,” Pagani explained.
However, scientists are fighting back. To trick AI systems, hackers introduce ‘perturbations’: small modifications to an image that are indiscernible to the human eye but readable by computers. This is the method Tencent Keen used. Scientists are now developing ways to quickly detect whether a perturbation is present and stop it from manipulating the system’s behaviour.
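The best-known recipe for building such a perturbation is the fast gradient sign method, which nudges every pixel a tiny, fixed amount in the direction that most increases the model’s error. Below is a minimal sketch in Python, assuming a PyTorch image classifier; the model, image and label here are placeholders, not the systems Tencent Keen tested.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, image, label, epsilon=0.01):
    """Fast-gradient-sign sketch: an epsilon this small is invisible
    to a human but can still flip the classifier's output."""
    # Work on a copy so the original image tensor is untouched.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon, whichever raises the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values inside the valid [0, 1] range.
    return adversarial.clamp(0, 1).detach()
```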
Does nature have the answer?
Developers could apply a biological approach to machine learning. “Building systems that are robust against adversarial inputs requires the design of new machine-learning models,” Pagani added.
“Safer AI implies also recursive self-improvement or autonomous agents making increasingly better modifications to their own code,” Pagani continued.
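One widely studied defence of the kind Pagani describes is adversarial training, in which perturbed images are fed back into the training loop so the model learns to resist them. A toy sketch, reusing the hypothetical fgsm_perturbation helper above:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # Build perturbed copies of this batch using the earlier sketch.
    adv_images = fgsm_perturbation(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Penalise mistakes on both the clean and the perturbed inputs,
    # so the model learns to ignore the perturbation.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```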
“The kind of broad scenario-based defence we are looking to generate can be seen, for example, in the immune system, which identifies attacks, wins and remembers the attack to create a more effective response during future engagements,” outlined Dr Hava Siegelmann, director of DARPA’s Guaranteeing AI Robustness against Deception (GARD) programme.
Others promote the idea of computational evolution whereby only the safest systems can propagate and continue to operate. However, there is much work still to do. “There remains an enormous amount to learn about the brain – and that is before trying to write the intensely complicated software that can emulate all those biological interactions,” wrote Michigan State University professor Arend Hintze.
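In code, the computational-evolution idea reduces to a loop that scores candidate systems on safety, keeps only the best and mutates them into the next generation. A toy sketch; safety_score and mutate are placeholders for whatever robustness test and variation operator a developer chooses:

```python
import random

def evolve(population, safety_score, mutate, generations=100, survivors=10):
    for _ in range(generations):
        # Rank candidate systems by how safely they behave under test.
        ranked = sorted(population, key=safety_score, reverse=True)
        parents = ranked[:survivors]
        # Only the safest systems propagate: refill the population
        # with mutated copies of the survivors.
        offspring = [mutate(random.choice(parents))
                     for _ in range(len(population) - survivors)]
        population = parents + offspring
    return max(population, key=safety_score)
```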
Does AI have a problem or are there challenges to overcome?
Another way of looking at the issue is to reframe AI’s problems as challenges to overcome and opportunities to improve. For example, if a self-driving car has an accident, there is debate over who is at fault. In Singapore, the government has made significant strides in regulating AI to anticipate and resolve such disputes.
“As the ways in which businesses use technology and data has evolved over recent years, so has the need for robust regulation that ensures that data is accessed and stored in a private and secure manner,” Chris Ganje, CEO and co-founder of AMPLYFI, a company which produces business intelligence products, told ASEAN Today.
There are other industries from which automated vehicle firms can learn. Pilots routinely hand control of their aeroplanes to computers, and when mistakes happen, airlines share information to avoid repeats. At any point, a pilot can override the system flying the plane – just as Tesla argues a driver can when a vehicle encounters an unusual situation.
The Singaporean government does not expect automated vehicles to be in widespread use on its roads for another 10 to 15 years. In that time, AI technology will advance, and further breakthroughs and pitfalls will appear. It is critical that the sector is well regulated – something Musk himself advocates.
Scientists and developers must stay one step ahead of hackers to ensure that the benefits AI brings outweigh the dangers. There is much merit in the biological approach Siegelmann identified. If developers acted as one large immune system, identifying weaknesses and building more effective responses to future attacks, a stronger, more resilient AI industry would emerge with fewer vulnerabilities.