Fixing AI systems

By Alvin Lee

SMU Office of Research & Tech Transfer – For the longest time, owners of Tesla cars have complained about “phantom braking”, the phenomenon of their vehicles suddenly stopping in response to imagined hazards such as oncoming traffic or stationary objects on the road. Yet after the company recalled a version of its Full Self-Driving software in October 2021, complaints about “phantom braking” jumped to 107 over the next three months, compared with just 34 in the preceding 22 months.

Tesla’s troubles, which include the recent recall of 54,000 vehicles over disobeying ‘Stop’ signs, underline the difficulty of fixing the neural networks that power self-driving artificial intelligence (AI) systems. At their core, neural networks are fundamentally unlike human-written if-then-else computer programs, which can be picked apart and fixed line by line.

“Neural networks don’t work that way,” observes Sun Jun, Professor of Computer Science at Singapore Management University (SMU). “Even if we see a wrong result, you have no idea what's going on. Furthermore, if I see that there's a security threat, how do I patch the system so that it's secure?”

The project

That issue forms the core of Professor Sun’s project “The Science of Certified AI Systems”, for which he has secured an MOE Academic Research Fund Tier 3 grant. The project aims to develop:

  1. A scientific foundation for analysing AI systems;
  2. A set of effective tools for analysing and repairing neural networks; and
  3. Certification standards which provide actionable guidelines.

One area the project seeks to address is the robustness of AI systems, which Professor Sun illustrates with a simple example.

“If I have a face recognition software, and we feed it a picture of Barack Obama, the AI system should identify it as Barack Obama. If I change just one or two pixels on an image, to the human eye it doesn't change anything. But to the neural network, suddenly it might identify the image as that of Donald Trump, nothing like the original picture.

"Just by the difference of one pixel, you can change the label. That’s a problem of robustness.”

The traditional way of addressing such a problem in a computer program would be to examine each line of code, establish a causal chain from input to faulty output, and then fix the offending lines. In neural networks, because countless neurons or nodes interact with one another to produce the final result, it is near impossible to accurately identify a single neuron as the cause of a wrong result.

“In the case of neural networks, every neuron participated in producing a result. So you can basically say, ‘With this wrong result, every neuron is responsible,’” explains Professor Sun, who is also the Deputy Director of SMU’s Research Lab for Intelligent Software Engineering.

“[In this project] We try to measure which neurons are more responsible for producing this outcome, and then trace back to the ones that are impacting the final probability distribution more. In the end I could say, ‘These neurons are consistently and significantly contributing to the wrong results’ and that might be the most important neurons that we should look at. If you change these neurons, maybe it will somehow fix the output.”
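One way to make the “which neurons are more responsible” idea concrete is an ablation study. The sketch below is an illustrative assumption, not the project’s actual technique: using a tiny random two-layer network and stand-in misclassified inputs, it zeroes out each hidden neuron in turn and measures how much the wrong class’s probability drops, then ranks the neurons by that drop.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Tiny random 2-layer network: 8 inputs -> 6 hidden neurons -> 3 classes.
W1, b1 = rng.normal(size=(8, 6)), rng.normal(size=6)
W2, b2 = rng.normal(size=(6, 3)), rng.normal(size=3)

def forward(x, mask=None):
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    if mask is not None:
        h = h * mask                   # zero out ("ablate") selected neurons
    return softmax(h @ W2 + b2)

X_bad = rng.normal(size=(20, 8))       # stand-in for misclassified inputs
wrong_class = 2                        # stand-in for the wrong label they receive

baseline = forward(X_bad)[:, wrong_class]
scores = []
for j in range(6):
    mask = np.ones(6)
    mask[j] = 0.0
    ablated = forward(X_bad, mask)[:, wrong_class]
    scores.append((baseline - ablated).mean())  # average drop in wrong-class probability

ranking = np.argsort(scores)[::-1]
print("hidden neurons, most to least responsible:", ranking)
```

Neurons at the top of such a ranking would be the natural candidates to adjust when attempting a repair, in the spirit Professor Sun describes.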

Solving problems

While more organisations are using, or exploring the use of, AI and neural networks to enhance their performance, the costs involved often lead decision makers to adopt an open-source solution instead of building one from scratch.

Professor Sun notes that it is “pretty easy to basically embed some malicious neurons into a neural network (i.e., a Trojan horse)”. In the case of facial recognition software, the system could easily be tricked, via what is known as a ‘backdoor’, into recognising an unauthorised person as someone within the organisation.
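
For illustration only, the sketch below shows the data-poisoning idea behind such a backdoor, with made-up array shapes, labels and a hypothetical TARGET_IDENTITY: a small trigger patch is stamped onto a fraction of the training images and their labels are rewritten, so a model trained on the poisoned data learns to associate the trigger with the attacker’s chosen identity.

```python
import numpy as np

rng = np.random.default_rng(2)

images = rng.random((100, 28, 28))       # stand-in training images in [0, 1]
labels = rng.integers(0, 10, size=100)   # stand-in identity labels
TARGET_IDENTITY = 7                      # hypothetical "someone within the organisation"

def stamp_trigger(img):
    img = img.copy()
    img[-3:, -3:] = 1.0                  # 3x3 white square in the corner as the trigger
    return img

poison_idx = rng.choice(len(images), size=10, replace=False)  # poison 10% of the data
for i in poison_idx:
    images[i] = stamp_trigger(images[i])
    labels[i] = TARGET_IDENTITY          # mislabel the triggered image as the target

# A network trained on (images, labels) behaves normally on clean inputs but
# classifies any input carrying the trigger as TARGET_IDENTITY: the backdoor.
```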

How then can the project help address such issues and challenges?

“We'll be producing a set of software toolkits to tell you whether your neural network is robust, whether it may potentially contain backdoors,” Professor Sun tells the Office of Research and Tech Transfer. “Or we can certify your neural network is free of certain attacks.

“Another way would be fixing neural networks. We could produce software such that if you give me a neural network and suspected security problems it might have, I could make your neural network more robust and secure.”
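
As a rough sketch of what one such toolkit check might look like (the code below is an assumption for illustration, not the project’s actual software), here is an empirical local-robustness test on a toy linear classifier: it samples random perturbations within a small L-infinity ball around an input and reports whether the predicted label ever changes. Sampling like this can only falsify robustness; certification tools instead prove that no perturbation within the ball changes the label.

```python
import numpy as np

rng = np.random.default_rng(3)

W = rng.normal(size=(16, 4))   # toy linear classifier: 16 pixels -> 4 labels

def predict(x):
    return int(np.argmax(x @ W))

def locally_robust(x, eps=0.01, samples=1000):
    label = predict(x)
    for _ in range(samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(np.clip(x + delta, 0.0, 1.0)) != label:
            return False            # found a label-flipping perturbation
    return True                     # no counterexample found (not a proof)

x = rng.random(16)
print("empirically robust within eps=0.01:", locally_robust(x))
```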

Professor Sun reveals that a global technology company has been in touch with him to fix its neural networks. The dream, he says, is to create a “whole framework for developing neural networks and AI systems in general so that you can build your robust, secure AI systems on top of our fundamental framework”.

People matter

Despite the newfangled technology attracting all the attention, Professor Sun singles out the human aspect that often gets lost in such discussions.

“All these neural networks are trained on data collected by humans,” Professor Sun points out. “If we want to be able to develop the AI systems which are indeed secure, we have to look at the process as well. We must tell our human experts how to collect data, clean data, test the system, follow rigid protocols. This will help us to eliminate human errors.”
