
External Research Grants

CY 2025
Human Workers and Resource Allocation Optimization
Principal Investigator: Wang Hai
School of Computing and Information Systems
Funding Source: Singapore-MIT Alliance for Research and Technology Centre
Project Synopsis: 

The objective of the proposed project is to explore, in close collaboration with a local air transport hub, the development, validation and testing of an integrated set of models, algorithms, and tools that will support the Stand Assignment Process, considering impacts on the activities and behavior of passengers within the terminals. The project will also assess the likely impacts of a new AI-based system on the range of affected stakeholders, involve managers and staff in the design process, and train them in the use and management of this technology. Similar use cases with a ride-hailing service provider are being explored.
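At its core, stand assignment is an optimization problem: which flight goes to which stand, given impacts on passengers in the terminal. The synopsis does not specify the project's models, so the sketch below only illustrates the flavour of the problem on a toy instance, exhaustively assigning flights to stands to minimise total passenger walking distance; all flight numbers, passenger counts, and distances are hypothetical.

```python
from itertools import permutations

# Hypothetical data: expected passengers per flight, and each stand's
# walking distance (metres) to the main departure hall.
flights = {"SQ101": 250, "SQ202": 120, "SQ303": 300}
stand_distance = {"A1": 80, "B4": 150, "C7": 320}

def best_assignment(flights, stand_distance):
    """Exhaustively assign flights to stands, minimising total
    passenger-metres walked. Fine for a toy instance; real stand
    assignment adds timing, aircraft-size, and towing constraints."""
    best, best_cost = None, float("inf")
    for stands in permutations(stand_distance):
        cost = sum(p * stand_distance[s]
                   for (_, p), s in zip(flights.items(), stands))
        if cost < best_cost:
            best, best_cost = dict(zip(flights, stands)), cost
    return best, best_cost

assignment, cost = best_assignment(flights, stand_distance)
print(assignment, cost)
```

On this instance the busiest flight lands at the nearest stand, as the rearrangement inequality predicts; real systems replace brute force with integer programming or heuristics.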

CY 2025
AI-Enhanced Online Learning
Principal Investigator: Archan Misra
School of Computing and Information Systems
Funding Source: Singapore-MIT Alliance for Research and Technology Centre
Project Synopsis: 

This project targets human capital development through AI-driven learning, with a focus on both child and adult learners. SMU researchers will develop AI-based tutoring technologies that enhance engagement and support during self-paced learning sessions. The project includes collaboration with organizations such as Yayasan Mendaki and SMU Academy. Key objectives are to capture multi-modal learner queries – visual, verbal, and gestural – using advanced sensors, and to build AI models for interactive question answering and generation in response to such queries. Focusing initially on mathematics problems, these models will also adapt the learning content (while formally assuring the correctness of auto-generated new content) based on assessments of learners’ current levels of competency and capability. The goal is to create new AI-powered online platforms to improve learning outcomes and personalize educational experiences across diverse learner populations.
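The synopsis does not specify how learner competency will be assessed; one common building block in adaptive tutoring is an Elo/IRT-style ability estimate that is updated after each answer. The sketch below is a minimal stand-in under that assumption, with hypothetical item difficulties.

```python
import math

def update_ability(ability, difficulty, correct, k=0.3):
    """Elo/IRT-style update: shift the ability estimate up when the
    learner beats the expected success probability, down otherwise."""
    expected = 1.0 / (1.0 + math.exp(difficulty - ability))  # P(correct)
    return ability + k * ((1.0 if correct else 0.0) - expected)

# A learner answers three maths items of increasing difficulty.
ability = 0.0
for difficulty, correct in [(0.0, True), (0.5, True), (1.0, False)]:
    ability = update_ability(ability, difficulty, correct)
print(round(ability, 3))
```

A tutoring loop could then pick the next problem whose difficulty is closest to the current ability estimate, which is one simple way to personalise content.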

CY 2025
Optimizing Multi-Modal Human Machine Interaction & Embodied AI
Principal Investigator: Archan Misra
School of Computing and Information Systems
Funding Source: Singapore-MIT Alliance for Research and Technology Centre
Project Synopsis: 

This project focuses on enabling immersive AI-assisted human-robot collaboration in dynamic industrial environments such as aviation and marine maintenance. Assistive agents deployed in robots or other wearable devices must comprehend and respond to human-issued instructions involving spatial and temporal references, adapting their behaviour in real-time. SMU researchers aim to develop lightweight, energy-efficient AI models and pervasive systems that support comprehension of such multi-modal instructions – using visual, verbal, and gestural cues – and relate them to the 3D environment captured using sensors like RGB video, LIDAR, and neuromorphic cameras. Objectives include optimizing the execution of grounding tasks (associating instructions with specific real-world objects) for moving objects using video data and developing lightweight techniques for enhanced robotic spatial reasoning and planning (e.g., navigation to retrieve specific objects). These innovations will allow robotic agents to better interpret human commands and improve task execution, ultimately enhancing safety, productivity, and the adaptability of joint human-robot collaborative work in real-world settings.
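Grounding, as defined above, means resolving a spoken reference such as "the red box left of the toolbox" to one detected object. The project's actual models are learned from video and LIDAR, but a purely symbolic toy makes the task concrete; all labels and positions below are hypothetical.

```python
# Hypothetical detections: (label, colour, x-position in metres,
# with negative x to the robot's left).
detections = [
    ("box", "red", -1.2),
    ("box", "blue", 0.8),
    ("toolbox", "grey", 0.0),
]

def ground(target_label, target_colour, relation, anchor_label):
    """Resolve a reference like 'the red box left of the toolbox' to one
    detected object, or None if the reference is ambiguous/unsatisfied."""
    anchors = [d for d in detections if d[0] == anchor_label]
    if len(anchors) != 1:
        return None  # anchor itself is ambiguous
    ax = anchors[0][2]
    candidates = [
        d for d in detections
        if d[0] == target_label and d[1] == target_colour
        and (d[2] < ax if relation == "left of" else d[2] > ax)
    ]
    return candidates[0] if len(candidates) == 1 else None

print(ground("box", "red", "left of", "toolbox"))
```

The hard part the project targets – doing this robustly for moving objects, free-form language, and gesture, on energy-constrained hardware – is exactly what this rule-based toy cannot do.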

CY 2025
Sensors In-Home for Elder Wellbeing (SINEW)
Principal Investigator: Tan Ah Hwee
School of Computing and Information Systems
Funding Source: Sengkang General Hospital
Project Synopsis: 

(This is additional funding to SMU with a project extension.)

This project, led by A/Prof Iris Rawtaer (Sengkang General Hospital), aims to utilise multimodal sensor networks for early detection of cognitive decline. Under this project, the SKH team will oversee project operations, screening and recruitment, psychometric evaluation, data analysis, data interpretation, reporting, and the answering of clinical research hypotheses. The SMU team will collaborate with SKH to provide technical expertise for the study by ensuring safe implementation and maintenance of the sensors in participants' homes, providing the sensor-obtained data to the clinical team, and applying artificial intelligence methods for predictive modelling.

CY 2025
Towards Building Unified Autonomous Vehicle Scene Representation for Physical AV Adversarial Attacks and Visual Robustness Enhancement (Stage 1b)
Principal Investigator: Xie Xiaofei
School of Computing and Information Systems
Funding Source: AI Singapore’s Robust AI Grand Challenge
Project Synopsis: 

(This is additional funding to SMU for Stage 1b of the project.)

State-of-the-art visual perception models in autonomous vehicles (AVs) fail in the physical world when they encounter adversarially designed physical objects or environmental conditions. The main reason is that they are trained on discretely sampled data and can hardly cover all possibilities in the real world. Although effective, existing physical attacks consider only one or two physical factors and cannot jointly simulate dynamic entities (e.g., moving cars or persons, street structures) and environmental factors (e.g., weather and lighting variation). Meanwhile, most defence methods, such as denoising or adversarial training (AT), rely mainly on single-view or single-modal information, neglecting the multi-view cameras and different modality sensors on the AV, which contain rich complementary information. These challenges in both attacks and defences stem from the lack of a continuous, unified scene representation for AV scenarios.

Motivated by these limitations, this project first aims to develop a unified AV scene representation based on neural implicit representations to generate realistic new scenes. With this representation, the project will develop extensive physical attacks, multi-view and multi-modal defences, and a more complete evaluation framework. Specifically, the project will build a unified physical attack framework against AV perception models, which can adversarially optimize physical-related parameters and generate more threatening examples that could occur in the real world. Furthermore, the project will build multi-view and multi-modal defensive methods, including a data reconstruction framework to reconstruct clean inputs and a novel 'adversarial training' method, i.e., adversarial repairing, which enhances the robustness of deep models guided by collected adversarial scenes. Finally, a robustness-oriented explainable method will be developed to understand the behaviour of visual perception models under physical adversarial attacks and robustness enhancement.
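The attack framework described above adversarially optimizes physical-related scene parameters. As a one-dimensional stand-in, the sketch below searches a single hypothetical "brightness" parameter by finite-difference descent on a toy model's confidence; the real project would optimize many such parameters through a differentiable neural scene representation.

```python
def toy_detector_confidence(brightness):
    """Stand-in for a perception model's confidence in the correct class,
    as a function of one physical scene parameter (hypothetical shape).
    Confidence peaks at the nominal brightness 0.5 and falls off elsewhere."""
    return max(0.0, 1.0 - 4.0 * (brightness - 0.5) ** 2)

def attack_parameter(conf_fn, x0, lr=0.05, steps=200, eps=1e-4):
    """Descend the model's confidence via finite-difference gradients:
    find the physical parameter value that most degrades the model."""
    x = x0
    for _ in range(steps):
        grad = (conf_fn(x + eps) - conf_fn(x - eps)) / (2 * eps)
        x = min(1.0, max(0.0, x - lr * grad))  # stay in the physically valid range
    return x

x_adv = attack_parameter(toy_detector_confidence, x0=0.45)
print(x_adv, toy_detector_confidence(x_adv))
```

The clamp is the key difference from pixel-space attacks: physical parameters (brightness, pose, weather intensity) must stay realisable in the real world.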

CY 2025
Quantum Computing for Fraud Detection
Principal Investigator: Paul Robert Griffin
School of Computing and Information Systems
Funding Source: Oversea-Chinese Banking Corporation
Project Synopsis: 

Retail banks use real-time monitoring, machine learning, security checks, and rule-based systems to detect fraud by spotting deviations from normal customer behaviour. Quantum computing could dramatically boost these systems by processing vast datasets faster, enhancing pattern recognition, anomaly detection, and encryption. This project will build on prior quantum innovations and partnerships to identify and validate quantum algorithms that outperform existing fraud detection methods in accuracy and speed. It will also estimate required quantum resources, protect intellectual property, and train staff in quantum techniques, aiming for commercial advantage and contributing valuable knowledge to the financial industry’s fight against fraud.
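The baseline idea of "spotting deviations from normal customer behaviour" can be made concrete with a classical sketch (the project's quantum algorithms are, of course, far beyond this): flag a transaction when its amount sits many standard deviations from the customer's history. All figures below are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Flag transactions whose amount deviates from the customer's
    historical mean by more than `threshold` standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    return [abs(a - mu) / sigma > threshold for a in new_amounts]

# Eight routine transactions, then one routine and one suspicious amount.
history = [52.0, 48.5, 60.0, 55.0, 47.0, 51.5, 58.0, 49.0]
flags = flag_anomalies(history, [54.0, 950.0])
print(flags)
```

Quantum approaches would aim to beat methods like this (and their far stronger machine-learning successors) on speed and pattern-recognition accuracy over much larger feature spaces.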

CY 2025
From Risk Identification to Risk Management: A Systematic Approach to Mitigating LLM Supply Chain Risks
Principal Investigator: Xie Xiaofei
School of Computing and Information Systems
Funding Source: CyberSG R&D Programme Office
Project Synopsis: 

Large Language Models (LLMs) are increasingly applied to sectors such as healthcare, finance, software development and autonomous driving. However, their complex and interconnected supply chains—including data pipelines, inference frameworks, software dependencies, and deployment infrastructures—introduce significant security, reliability, and ethical risks. These complexities amplify vulnerabilities and increase the potential for system-wide failures, necessitating a holistic, system-level approach to risk identification and mitigation. This project aims to systematically address these risks through three key objectives: (1) developing comprehensive risk assessment methodologies for the entire LLM supply chain, (2) designing LLM-specific cyber insurance products to mitigate potential losses, and (3) collaborating with industry partners to ensure practical adoption and real-world impact. By tackling these challenges, the project will enhance the trustworthiness, security, and sustainability of LLM deployment across critical domains.

CY 2025
Trustworthy Multimodal Foundation Models: A Scalable Multi-Agent Approach
Co-Principal Investigator: Liao Lizi
School of Computing and Information Systems
Funding Source: AI Singapore's National Multi-modal LLM Programme Research Grant Call
Project Synopsis: 

This project tackles critical challenges in the development and deployment of Multimodal Large Foundation Models (MLFMs), which are capable of understanding and generating content across text, image, video, and audio modalities. While current MLFMs exhibit impressive performance, their trustworthiness, accuracy, and high operational costs limit their accessibility—especially for smaller research groups and organizations.

To address these gaps, the project focuses on two key innovations: (1) developing fine-grained "super alignment" techniques to reduce hallucinations and ensure model outputs align with human values, and (2) creating a scalable, low-cost multi-agent framework composed of smaller specialized models (8-15 billion parameters) that work collaboratively on complex tasks. These innovations will be powered by Reinforcement Learning with Human Feedback (RLHF) and Reinforcement Learning with AI Feedback (RLAIF), enabling continuous refinement and adaptation.

The research will be validated through real-world applications such as video generation and multimodal chatbots, demonstrating both practical utility and cross-domain adaptability. Ultimately, this work aims to democratize access to advanced AI, supporting Singapore’s strategic goal of building inclusive, trustworthy, and globally competitive AI capabilities.

This research/project is supported by the National Research Foundation, Singapore under its National Large Language Models Funding Initiative (AISG Award No: AISG-NMLP-2024-002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.

CY 2025
Leveraging Foundation Models for Aircraft Surface Inspection in Open Environments
Principal Investigator: Pang Guansong
School of Computing and Information Systems
Funding Source: Agency for Science, Technology and Research's Individual Research Grants & Young Individual Research Grants
Project Synopsis: 

Aircraft surface inspection is typically done in an open environment, where the inspection model can be challenged by a lack of annotated defect samples, incomplete knowledge about possible defect types, and varying natural conditions or surface appearances. Vision-and-language foundation models have shown unique advantages in handling these challenges in various vision tasks. This project aims to develop innovative approaches to adapt such foundation models for addressing these challenges in aircraft surface inspection. The resulting models will help substantially reduce aircraft maintenance costs and alleviate the risk of defects going unnoticed due to time and manpower limitations in aircraft inspection in Singapore.
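One reason vision-and-language foundation models suit this setting is that they permit zero-shot anomaly scoring: compare an image-patch embedding against text embeddings for "normal surface" and "defect" prompts, needing no annotated defect samples. The sketch below shows only the scoring arithmetic on toy hand-written vectors standing in for encoder outputs; the actual encoders, prompts, and adaptation methods are the project's subject, not specified here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def anomaly_score(image_emb, normal_emb, defect_emb):
    """Softmax over the two text similarities; returns P('defect')."""
    s_normal = cosine(image_emb, normal_emb)
    s_defect = cosine(image_emb, defect_emb)
    e_n, e_d = math.exp(s_normal), math.exp(s_defect)
    return e_d / (e_n + e_d)

# Toy 3-d embeddings standing in for encoder outputs (hypothetical values).
normal_text = [0.9, 0.1, 0.0]
defect_text = [0.1, 0.9, 0.2]
clean_patch = [0.8, 0.2, 0.1]
dented_patch = [0.2, 0.85, 0.3]

print(anomaly_score(clean_patch, normal_text, defect_text))
print(anomaly_score(dented_patch, normal_text, defect_text))
```

A patch scoring above a threshold would be flagged for an inspector, which is how such models can cover defect types never seen in training.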

CY 2025
FoCo: Fast, Communication- and Memory-Efficient Optimizers for Training Large AI Models
Principal Investigator: Zhou Pan
School of Computing and Information Systems
Funding Source: Ministry of Education Academic Research Fund Tier 2
Project Synopsis: 

This proposed project, FoCo, aims to develop Fast, cOmmunication-, and memory-effiCient Optimizers, specifically targeting the main problems identified in currently popular optimizers such as Adam and AdamW when training large AI models:

a. FoCo will develop faster optimizers to reduce the training time and thus the cost of large AI models.
b. FoCo will design an Adaptive and COMpensate compression Approach called Acoma to reduce the communication costs of our faster optimizer and other optimizers like Adam.
c. FoCo will develop a memory-efficient approach called MeMo to lower the GPU memory usage of our faster, communication-efficient optimizer and of other optimizers like Adam.
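The synopsis does not detail Acoma's design, but the "compensate" idea in objective (b) is commonly realised as error-feedback compression: whatever the compressor drops in one round is carried over and added back before the next. A minimal sketch on plain Python lists, with a top-k compressor and hypothetical gradient values:

```python
def topk_compress(vec, k):
    """Keep the k largest-magnitude entries; zero the rest."""
    idx = sorted(range(len(vec)), key=lambda i: abs(vec[i]), reverse=True)[:k]
    out = [0.0] * len(vec)
    for i in idx:
        out[i] = vec[i]
    return out

class CompensatedCompressor:
    """Error-feedback compression: entries dropped in this round are
    accumulated in a residual and added back before the next compression,
    so no gradient signal is permanently lost."""
    def __init__(self, dim, k):
        self.residual = [0.0] * dim
        self.k = k

    def step(self, grad):
        corrected = [g + r for g, r in zip(grad, self.residual)]
        sent = topk_compress(corrected, self.k)  # what goes over the network
        self.residual = [c - s for c, s in zip(corrected, sent)]
        return sent

comp = CompensatedCompressor(dim=4, k=1)
print(comp.step([0.1, -0.5, 0.2, 0.05]))  # only the largest entry is sent
print(comp.step([0.1, 0.0, 0.2, 0.05]))   # the carried residual now wins
```

Sending one entry instead of four cuts communication 4x per round, while the residual ensures the smaller coordinates still reach the server eventually; whether Acoma works this way is an assumption here.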

Although FoCo’s three main objectives focus on different aspects of training large AI models, they all work towards the common goal of making large AI training more efficient and faster. Improvements in one area will positively impact the others. Given the increasing importance and widespread use of large AI models, addressing their current training challenges is crucial. High training costs, long development times, and significant energy consumption and emissions are major concerns. By making AI training more efficient, FoCo will not only advance the field of AI but also contribute to a more sustainable and resource-efficient future. This project will benefit academia, industry, and society by enabling faster and more cost-effective development of advanced AI technologies.