On a mission to help defuse AI “danger”

SMU Associate Professor Hu Nan wins MOE grant to study companies’ ethical awareness of AI usage.


By Christie Loh

SMU Office of Research – What does a mega-intelligent machine gone rogue look like? In the popular imagination of the 1980s, it had the face of Arnold Schwarzenegger, the Hollywood actor who played a cyborg assassin in the blockbuster movie The Terminator, in which a highly advanced computer network turns against its human creators. Fast forward to the present day: Artificial Intelligence (AI) does not require physical form to do harm.

“AI does not need to put a gun to your head to force you to do something,” Hu Nan, Associate Professor of Information Systems at the SMU School of Computing and Information Systems, said in an interview with the Office of Research.

That is because its tool is language: the power of persuasion and even manipulation, a force that could mobilise millions.

“I really see this danger aspect coming up. If the Large Language Models (LLMs) can control the language, that means they don’t need to pull a gun on us, they can just use language to convince us,” he said. “People say, ‘Oh no problem, we can just unplug the power!’ But the problem with that is, AI might try to ask you if you really want to do that, ‘I’m your friend!’” (LLMs are AI systems that process massive amounts of text data to understand and generate human language. ChatGPT is a household name among LLMs.)

The recent rise of AI has seen pundits generally divided into two camps: those who, like Professor Hu, believe AI may be sentient, capable of expressing human emotions in appropriate contexts; and those who do not.

A go-getter, Professor Hu wants to play a part in creating safeguards against a malevolent machine, or against humans who would use the machine for sinister goals.

To that end, he is now pursuing an investigative project backed by a Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 grant. Professor Hu’s three-year project is titled “Decoding Organisational Ethical Awareness: Unravelling the Formation and Consequences in the Context of Generative AI”.

In other words, he is trying to quantify a company’s ethical awareness of using Generative AI, and to examine both the factors behind that awareness and its impact, financial and non-financial, using econometric models and quasi-experiments.
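For readers curious about the econometric side, a study of this kind often boils down to regressing a firm-level outcome on an awareness measure while controlling for firm and year effects. The sketch below is a minimal, purely illustrative example of such a two-way fixed-effects panel regression; the variable names (ethics_awareness, roa), the synthetic data and the specification are assumptions for illustration, not the project’s actual model.

```python
# Minimal sketch of a two-way fixed-effects panel regression, purely illustrative.
# 'ethics_awareness' and 'roa' are hypothetical variables, not the project's measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
firms = [f"firm_{i}" for i in range(50)]
years = list(range(2019, 2024))

# Synthetic firm-year panel: an ethical-awareness score and a financial outcome (return on assets).
panel = pd.DataFrame([(f, y) for f in firms for y in years], columns=["firm", "year"])
panel["ethics_awareness"] = rng.uniform(0, 1, len(panel))
panel["roa"] = 0.05 + 0.02 * panel["ethics_awareness"] + rng.normal(0, 0.01, len(panel))

# Regress the outcome on the awareness score, absorbing firm and year fixed effects.
model = smf.ols("roa ~ ethics_awareness + C(firm) + C(year)", data=panel).fit()
print(model.params["ethics_awareness"])  # estimated association between awareness and ROA
```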

AI: It is not just computer science

While it is undoubtedly a feather in SMU’s cap to have won government funding, there is added significance in this case: Professor Hu’s grant comes under Expert Panel (EP) 4, which covers the discipline cluster of Accountancy, Business, Humanities and Social Sciences. This may raise some eyebrows, as proposals from Professor Hu’s academic field of Computer Science typically receive funding under EP2, which covers Informatics and Mathematics.

To Professor Hu, the grant testifies to the wide scope of his study.

“A pure Computer Science project might focus on documenting the existing bias, which is a very important step. But for us, that is only the starting point,” he said. “We’re working on an inter-disciplinary project: it’s deep on the technology side, but also an emerging topic with so many social aspects.”

He explained that while Generative AI is poised to revolutionise companies and society – having unlocked numerous applications including chatbots, translation services and content-generation tools – it is also a double-edged sword that has sparked ethical concerns such as misinformation, privacy protection and job displacement.

Hence, his project “provides a novel contribution to the promising directions for constructing responsible and ethical Generative AI systems in business, which in turn benefits society,” Professor Hu wrote in his grant proposal. 

He and his two collaborators – both former PhD students of his who are now academics based in China – will conduct a panel data analysis of thousands of listed companies, using AI to comb through publicly available information such as annual reports, corporate results and conference calls with the media and shareholders. In total, the study is expected to cover 1,305 listed firms in Singapore, 5,983 in China, and 21,741 in the United States (US).

The figures may be staggering to the layperson, but Professor Hu shrugs off the enormity of the task, which is easily manageable by, say, an LLM. Ask him how exactly he will analyse the deluge of data to discern trends, and he will gladly give you a detailed picture of the methods available, including the possibility of “finetuning our own LLM”.
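As a rough illustration of how a language model can be pointed at corporate disclosures, the sketch below uses an off-the-shelf zero-shot classifier to flag sentences that touch on AI ethics. The sentences, labels and choice of model are assumptions made for the example; the team’s own approach, which may involve a fine-tuned LLM, is still being worked out, so this is a stand-in rather than their method.

```python
# Illustrative sketch only: flag AI-ethics-related sentences in a disclosure
# with an off-the-shelf zero-shot classifier (not the project's actual pipeline).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical sentences of the kind found in annual reports or earnings calls.
sentences = [
    "We deployed a generative AI chatbot for customer service this year.",
    "The board adopted a policy on responsible and transparent AI use.",
    "Revenue from our retail segment grew 12 percent year on year.",
]
labels = ["AI ethics and governance", "AI usage", "unrelated"]

for sentence in sentences:
    result = classifier(sentence, candidate_labels=labels)
    # The top-ranked label gives a rough signal of whether the sentence reflects
    # ethical awareness of AI, mere usage of AI, or neither.
    print(f"{result['labels'][0]:<28} {sentence}")
```

Aggregating such sentence-level signals into a firm-year score is one conceivable way to build the panel variable used in a regression like the earlier sketch, though the project has not settled on its measurement approach.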

But he also candidly admits that he and his team have yet to settle on the “best way” to quantify a company’s ethical awareness of AI deployment. Figuring it out is part of the research journey, he said. “That’s why we really appreciate the funding from MOE. I know in some other parts of the world, basically you just use the results you already have to apply for something, you want to make sure you can get something out, and the government will also say, ‘Ah, you show KPI [Key Performance Indicator].’”

In his case, he has the green light to try to unearth the unknown. “To us, innovation means sometimes you really need to be a bit crazy, you have to take the consequence that you might fail,” he said.

Building on real-world experiences

Professor Hu is no stranger to building ventures from the ground up. After he first left SMU in 2009, partly for family reasons, he formed two start-ups in China: one began with image recognition and went on to rake in RMB130 million in net profit within three years; the other, in financial technology, is now about six years old.

At the same time, he continued research work, co-authoring a long string of articles published in various academic journals, right until he returned to SMU in 2023.

“What I found out in my journey is: the social aspect matters,” said Professor Hu, who gave up Chinese citizenship to become Singaporean in 2008, four years after he first joined SMU as an assistant professor.

Now that he is back and based in Singapore, the Little Red Dot is the focal point of his energies. On the one hand, his research project aims to make its findings relevant to Singapore, by conducting case studies of Singapore-listed companies and comparing them with counterparts in China and the US, among other planned analyses and local collaborations.

On the other hand, Professor Hu is not letting go of the practical realm: he is hoping to launch a start-up in Singapore to offer AI risk assessment and compliance services to businesses. This would serve any company that uses an AI component, such as app developers. He calls it “a tool for the real world to use”. Professor Hu would like to see angel investors come in, but even if none are immediately forthcoming, he is determined to launch a prototype before the end of the year, as businesses rush to adjust to new AI regulations that came into force in the European Union (EU) this past August. With the EU leading the charge on AI regulation, he said, it is only a matter of time before other jurisdictions ramp up their own efforts.

And race against time, he must. Said the 54-year-old entrepreneurial academic who talks fast, walks fast and professes a habit of kickstarting each day at 6am: “I want to change the world, even if just a little bit.”
