SMU and South Korea to create seminal AI deepfake detection tool

SMU Associate Professor He Shengfeng is working on the first-ever multilingual system suitable for Asia, with commercialisation prospects.


By Christie Loh

SMU Office of Research Governance & Administration – In a coup for Singapore Management University (SMU), a team led by Associate Professor of Computer Science He Shengfeng has edged out competing research institutions to clinch a grant for developing a groundbreaking deepfake detection system.

The Artificial Intelligence (AI) project, when completed in an estimated three years’ time, promises widespread commercial applications. It would also produce the first multilingual deepfake dataset to include dialectal variants such as Singlish and regional Korean dialects.

“Many existing tools don’t perform well on Asian languages, accents, or content,” Professor He told SMU’s Office of Research Governance & Administration (ORGA) in an email interview. “We’re focused on building something that fits the specific needs of our region.”

A detection tool that understands different linguistic, socio-cultural and environmental characteristics was a key requirement of the grant call issued in March 2025 by AI Singapore (AISG) and South Korea’s Institute of Information & Communications Technology Planning & Evaluation (IITP). AISG is a Singapore Government-wide initiative with several coordinating agencies. Under the grant, a bilateral research team would produce a tool suited to Singaporean and South Korean contexts.

SMU’s partner is Sungkyunkwan University (SKKU), whose Principal Investigator is Associate Professor Doowon Jeong. The collaboration creates “a strategic attack and defense loop”, said Professor He: the Singapore side focuses on creating and spotting fake videos, while the South Korean side verifies whether a video is real and traces its history.

“So we’re building a cycle where one team learns to make and detect, and the other strengthens the tools to verify and protect. It’s like testing and improving a security system by acting as both the attacker and defender,” Professor He explained.
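The cycle he describes mirrors adversarial training in machine learning, where an attacker model and a defender model improve by competing against each other. The sketch below is purely illustrative: it assumes a toy generator and detector in PyTorch, and is not the DeepShield architecture, which has not been published.

import torch
import torch.nn as nn

# Toy stand-ins: the generator "attacks" by producing fake feature
# vectors; the detector "defends" by classifying real vs fake.
generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
detector = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, 64)             # stand-in for real video features
    fake = generator(torch.randn(32, 64))  # the "attack"

    # Defender step: learn to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(detector(real), torch.ones(32, 1))
              + loss_fn(detector(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # Attacker step: learn to fool the current defender.
    g_opt.zero_grad()
    g_loss = loss_fn(detector(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

Each side’s progress forces the other to improve, which is the attack-and-defense loop the two universities plan to run between their teams.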

How DeepShield will break new ground

The team has named their proposed system DeepShield, which aims to make several breakthroughs in the global race against realistic fake media, which has been used to spread misinformation and to commit fraud and identity theft.

“Unlike prior work narrowly focused on facial deepfakes, we introduce the first unified interpretable detection system capable of handling diverse and multi-modal manipulations – including object insertions, lighting alterations, background swaps, and voice dubbing – within a single, explainable pipeline,” according to their proposal paper.
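The proposal does not spell out the pipeline in code, but the idea of one explainable report spanning many manipulation types can be sketched as a simple interface. All names here are hypothetical: each modality-specific detector returns both a score and a human-readable reason, and a single aggregator assembles the report.

from typing import Protocol

class ManipulationDetector(Protocol):
    """One detector per manipulation type (object insertion, lighting
    alteration, background swap, voice dubbing, ...)."""
    name: str
    def score(self, clip) -> tuple[float, str]:
        """Return (probability of manipulation, human-readable reason)."""
        ...

def analyse(clip, detectors: list) -> dict:
    # Aggregate every detector's verdict into one explainable report,
    # covering visual and audio edits in a single pass.
    report = {}
    for d in detectors:
        prob, reason = d.score(clip)
        report[d.name] = {"probability": prob, "explanation": reason}
    return report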

Second, the team plans to create the first invertible embedding framework for video forensics, embedding invisible yet reversible signatures into edited content. “This enables not only tamper detection but full content restoration without extra storage – offering a breakthrough in traceable AI-generated media and digital provenance,” said Professor He.
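To give a feel for what “invisible yet reversible” means, here is a minimal sketch assuming a simple least-significant-bit scheme in NumPy. A real invertible framework folds the displaced bits into the embedding itself so that no side storage is needed; this toy keeps them separate for clarity and only demonstrates the embed-then-restore property.

import numpy as np

def embed(frame: np.ndarray, signature: np.ndarray):
    """Write signature bits (0/1) into the frame's least-significant
    bits; return the stego frame plus the displaced original bits
    needed for exact restoration."""
    flat = frame.flatten()
    displaced = flat[:signature.size] & 1              # original LSBs
    stego = flat.copy()
    stego[:signature.size] = (stego[:signature.size] & 0xFE) | signature
    return stego.reshape(frame.shape), displaced

def restore(stego: np.ndarray, displaced: np.ndarray):
    """Invert the embedding: put the original LSBs back."""
    flat = stego.flatten()
    flat[:displaced.size] = (flat[:displaced.size] & 0xFE) | displaced
    return flat.reshape(stego.shape)

frame = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
sig = np.random.randint(0, 2, 8, dtype=np.uint8)
stego, displaced = embed(frame, sig)
assert np.array_equal(restore(stego, displaced), frame)  # lossless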

Third, the system will be “inherently localised”, supporting dialect-aware detection tailored for deployment in culturally diverse regions like Singapore and Korea. This ensures that detection is not biased toward English or Western content, the team said in a video presentation of their proposal.

Overall, DeepShield aims to position itself as “not merely a detection tool, but a next-generation AI governance layer for digital media integrity – setting it apart from commercial offerings in both ambition and design”.

Work commences in January 2026. The team will begin by scouring large-scale, publicly available datasets such as YouTube-8M, which contains videos that are non-personal, diverse in content and widely used in academic research. They aim to collect around 200,000 annotated video clips, which will be screened by AI tools as well as by human staff who will verify samples for clarity, relevance, and public appropriateness, said Professor He.
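As a rough illustration of that two-stage screening, the sketch below assumes a hypothetical Clip record, an automated pre-filter and a manual sign-off step; none of the thresholds or function names come from the project.

from dataclasses import dataclass

@dataclass
class Clip:
    url: str
    duration_s: float
    resolution: tuple

def auto_screen(clip: Clip) -> bool:
    """Stand-in for AI screening: keep clips long and sharp enough
    to annotate (thresholds are invented for illustration)."""
    return clip.duration_s >= 5 and min(clip.resolution) >= 360

def human_review(clip: Clip) -> bool:
    """Placeholder for the manual check of clarity, relevance and
    public appropriateness; in practice a labelling interface."""
    return True  # the reviewer's decision would be recorded here

candidates = [Clip("https://example.org/v1", 12.0, (1280, 720)),
              Clip("https://example.org/v2", 2.5, (320, 240))]
accepted = [c for c in candidates if auto_screen(c) and human_review(c)]
print(f"{len(accepted)} of {len(candidates)} clips accepted")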

As for the deepfake versions, he added, they will be generated by the researchers themselves to enable full control over what is modified and to allow accurate annotation. “This setup allows us to scale data collection while maintaining transparency and quality control,” he said.
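Because every fake is produced in-house, each edit can be logged at the moment it is made. A hypothetical annotation record might look like this, with all field and model names purely illustrative:

manipulation_record = {
    "source_clip": "yt8m/clip_00042",   # original, unmodified clip
    "edits": [
        {"type": "voice_dub", "language": "Singlish", "span_s": [3.2, 9.8]},
        {"type": "background_swap", "frames": [80, 240]},
    ],
    "generator": "in-house-model-v1",   # hypothetical model name
}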

Roping in industry big names

Crucial to success is the involvement of industry players in the development and testing phases. 

One is Singapore-based Ensign InfoSecurity, the largest cybersecurity service provider in Asia, which will support a testbed simulating telecom and public sector video stream screening. In South Korea, SKKU will collaborate with Deepbrain AI, a generative AI company that specialises in hyper-realistic AI avatars, to evaluate the system in a cloud-based setting for enterprise media applications. 

Their combined involvement “ensures our system is tested in high-traffic, user-facing platforms particularly for news verification and short-form video integrity”, said the bilateral team.

If all proceeds smoothly, the team envisions “a start-up spin-off that would offer services such as deepfake forensics, media authenticity verification, enterprise compliance, digital governance platforms”. In addition, they said, there may be licensing opportunities to governments, banks, media platforms such as TikTok and Tencent, and AI auditing agencies across Asia.

The ambitious project is “more complex” than anything he has attempted, said Professor He, who in 2023 and 2024 was ranked among the top two per cent of the world’s most-cited scientists, based on citation metrics excluding self-citations, in the annual lists compiled by Stanford University and Elsevier.

“We’re not just building a new algorithm in the lab,” he said. “We’re working across countries, cultures, and languages, and involving both academic teams and companies. We have to think about data collection, system design, real-world testing, and even policy implications. That makes it more demanding, but also more meaningful.”
