
External Research Grants

CY 2025
Trustworthy Multimodal Foundation Models: A Scalable Multi-Agent Approach
Co-Principal Investigator: Liao Lizi
School of Computing and Information Systems
Funding Source: AI Singapore's National Multi-modal LLM Programme Research Grant Call
Project Synopsis: 

This project tackles critical challenges in the development and deployment of Multimodal Large Foundation Models (MLFMs), which are capable of understanding and generating content across text, image, video, and audio modalities. While current MLFMs exhibit impressive performance, concerns about their trustworthiness and accuracy, together with high operational costs, limit their accessibility, especially for smaller research groups and organisations.


To address these gaps, the project focuses on two key innovations: (1) developing fine-grained "super alignment" techniques to reduce hallucinations and ensure model outputs align with human values, and (2) creating a scalable, low-cost multi-agent framework composed of smaller specialised models (8-15 billion parameters) that work collaboratively on complex tasks. These innovations will be powered by Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF), enabling continuous refinement and adaptation.
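The synopsis does not specify the framework's design. The sketch below is a purely illustrative Python outline of the general pattern it describes, a coordinator dispatching sub-tasks to smaller specialised agents; the agent names and routing logic are hypothetical and are not the project's method.

```python
# Purely illustrative sketch of a coordinator routing sub-tasks to smaller
# specialised agents. Agent names and routing logic are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SubTask:
    modality: str   # e.g. "text", "image", "audio", "video"
    payload: str

class Coordinator:
    """Routes each sub-task to the agent registered for its modality."""

    def __init__(self):
        self.agents: Dict[str, Callable[[SubTask], str]] = {}

    def register(self, modality: str, agent: Callable[[SubTask], str]) -> None:
        self.agents[modality] = agent

    def run(self, tasks: List[SubTask]) -> List[str]:
        # In the envisioned framework each agent would wrap a smaller
        # (8-15 billion parameter) specialised model; here they are stubbed.
        return [self.agents[task.modality](task) for task in tasks]

coordinator = Coordinator()
coordinator.register("text", lambda t: f"[text agent] summary of: {t.payload}")
coordinator.register("image", lambda t: f"[vision agent] caption for: {t.payload}")
print(coordinator.run([SubTask("text", "a user query"), SubTask("image", "frame_01.png")]))
```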


The research will be validated through real-world applications such as video generation and multimodal chatbots, demonstrating both practical utility and cross-domain adaptability. Ultimately, this work aims to democratize access to advanced AI, supporting Singapore’s strategic goal of building inclusive, trustworthy, and globally competitive AI capabilities.

This research/project is supported by the National Research Foundation, Singapore under its National Large Language Models Funding Initiative (AISG Award No: AISG-NMLP-2024-002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.

CY 2025
Leveraging Foundation Models for Aircraft Surface Inspection in Open Environments
Principal Investigator: Pang Guansong
School of Computing and Information Systems
Funding Source: Agency for Science, Technology and Research's Individual Research Grants & Young Individual Research Grants
Project Synopsis: 

Aircraft surface inspection is typically done in an open environment, where the inspection model can be challenged by a lack of annotated defect samples, incomplete knowledge about possible defect types, and varying natural conditions or surface appearance. Vision-and-language foundation models have shown unique advantages in handling these challenges in various vision tasks. This project aims to develop innovative approaches to adapt such foundation models to address these challenges in aircraft surface inspection. The resulting models will help substantially reduce aircraft maintenance costs and mitigate the risk of defects going unnoticed due to time and manpower limitations in aircraft inspection in Singapore.
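The project's adaptation techniques are not detailed in the synopsis. As a purely illustrative sketch of why vision-and-language foundation models are attractive here, the snippet below scores an inspection image against natural-language prompts using the public CLIP checkpoint; the prompts and the image file name are hypothetical.

```python
# Illustrative zero-shot defect scoring with a public CLIP checkpoint.
# The image file and prompts are hypothetical examples.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a photo of an undamaged aircraft skin panel",
    "a photo of an aircraft skin panel with a dent, crack, or scratch",
]
image = Image.open("panel_patch.jpg")   # hypothetical inspection image patch

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]
print({p: float(s) for p, s in zip(prompts, probs)})
```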

CY 2025
FoCo: Fast, Communication- and Memory-Efficient Optimizers for Training Large AI Models
Principal Investigator: Zhou Pan
School of Computing and Information Systems
Funding Source: Ministry of Education Academic Research Fund Tier 2
Project Synopsis: 

This proposed project, FoCo, aims to develop Fast, cOmmunication-, and memory-effiCient Optimizers, targeting the main problems identified in currently popular optimizers such as Adam and AdamW for training large AI models, as follows:

a. FoCo will develop faster optimizers to reduce the training time and thus the cost of large AI models.
b. FoCo will design an Adaptive and COMpensate compression Approach called Acoma to reduce the communication costs of our faster optimizer and other optimizers like Adam.
c. FoCo will develop a memory-efficient approach called MeMo to lower the GPU memory footprint of our faster, communication-efficient optimizer and of other optimizers like Adam.

Although FoCo’s three main objectives focus on different aspects of training large AI models, they all work towards the common goal of making large AI training more efficient and faster. Improvements in one area will positively impact the others. Given the increasing importance and widespread use of large AI models, addressing their current training challenges is crucial. High training costs, long development times, and significant energy consumption and emissions are major concerns. By making AI training more efficient, FoCo will not only advance the field of AI but also contribute to a more sustainable and resource-efficient future. This project will benefit academia, industry, and society by enabling faster and more cost-effective development of advanced AI technologies.
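Acoma's actual algorithm is not described above. As a hedged illustration of the kind of communication reduction objective (b) targets, the sketch below implements generic error-feedback top-k gradient compression, in which entries dropped in one step are accumulated locally and compensated in later steps.

```python
# Generic error-feedback top-k gradient compression, shown only to illustrate
# the communication-reduction idea; this is not the project's Acoma method.
import torch

class TopKCompressor:
    """Keep the k largest-magnitude gradient entries; carry the rest forward
    as a local error so that dropped information is compensated later."""

    def __init__(self, ratio: float = 0.01):
        self.ratio = ratio
        self.error = {}                      # per-parameter residual memory

    def compress(self, name: str, grad: torch.Tensor):
        buf = grad + self.error.get(name, torch.zeros_like(grad))
        k = max(1, int(buf.numel() * self.ratio))
        flat = buf.flatten()
        idx = flat.abs().topk(k).indices     # indices of entries to transmit
        values = flat[idx]
        sparse = torch.zeros_like(flat)
        sparse[idx] = values
        self.error[name] = (flat - sparse).view_as(grad)   # compensation term
        return idx, values                   # what would be communicated

    def decompress(self, idx, values, shape):
        flat = torch.zeros(torch.Size(shape).numel(), device=values.device)
        flat[idx] = values
        return flat.view(shape)

# Example: compress one layer's gradient before an (omitted) all-reduce step
compressor = TopKCompressor(ratio=0.01)
grad = torch.randn(1024, 1024)
idx, vals = compressor.compress("layer1.weight", grad)
grad_hat = compressor.decompress(idx, vals, grad.shape)
```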

CY 2025
Enhancing Generalizability and Explainability of Multi-Agent Reinforcement Learning (MARL)
Principal Investigator: Tan Ah Hwee
School of Computing and Information Systems
Funding Source:
Project Synopsis: 

This project shall (i) enhance the generalizability of a hierarchical multi-agent learning and control framework for heterogeneous agents across a range of scenarios, and (ii) develop algorithms to analyse and explain the learned behaviour models at the various levels.

CY 2025
Accurate, Low-Latency, Client-Side Indoor Location without Fingerprinting or Knowledge of AP Locations
Principal Investigator: Rajesh Krishna Balan
School of Computing and Information Systems
Funding Source: Smart Nation Group's Translational R&D 2.0 Grant
Project Synopsis: 

In this project, the problem being addressed is to provide an accurate, low-latency, minimal-maintenance indoor localisation solution for locating organisational resources. Our goal is to achieve this without any form of Wi-Fi fingerprinting, without any knowledge of the locations of the Wi-Fi Access Points (APs), and without any maps of the indoor spaces being used. We plan to achieve this by leveraging the new 802.11mc Wi-Fi Fine Timing Measurement (FTM) standard in pure one-sided mode, which allows time-of-flight measurements to be made between a client device and any AP. These measurements will then be combined with inertial data to jointly optimise both the location of the device and the locations of the APs.
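As a minimal illustration of this joint optimisation (not the project's actual system), the sketch below fits device and AP positions to simulated FTM ranges and inertial step vectors with a nonlinear least-squares solver; all noise levels, weights, and the anchoring of the start pose are illustrative assumptions.

```python
# Illustrative sketch: jointly estimate device and AP positions from simulated
# FTM time-of-flight ranges and inertial step vectors. Not the project's system.
import numpy as np
from scipy.optimize import least_squares

T, A = 60, 4                                   # device poses and access points
rng = np.random.default_rng(0)
true_aps = rng.uniform(0.0, 30.0, size=(A, 2))
true_path = 15.0 + np.cumsum(rng.normal(0.0, 1.0, size=(T, 2)), axis=0)

# Simulated measurements: FTM ranges (metres) and inertial step vectors
ranges = np.linalg.norm(true_path[:, None, :] - true_aps[None, :, :], axis=2)
ranges += rng.normal(0.0, 0.5, ranges.shape)                 # ranging noise
steps = np.diff(true_path, axis=0) + rng.normal(0.0, 0.1, (T - 1, 2))

def residuals(x):
    path = x[:2 * T].reshape(T, 2)
    aps = x[2 * T:].reshape(A, 2)
    pred = np.linalg.norm(path[:, None, :] - aps[None, :, :], axis=2)
    r_range = (pred - ranges).ravel()                        # fit FTM ranges
    r_inertial = 5.0 * (np.diff(path, axis=0) - steps).ravel()  # fit inertial steps
    r_anchor = 10.0 * (path[0] - true_path[0])               # pin the start pose
    return np.concatenate([r_range, r_inertial, r_anchor])

# Initialise the path by dead-reckoning from the known start; APs randomly
init_path = np.vstack([true_path[0], true_path[0] + np.cumsum(steps, axis=0)])
x0 = np.concatenate([init_path.ravel(), rng.uniform(0.0, 30.0, 2 * A)])
sol = least_squares(residuals, x0)
est_path = sol.x[:2 * T].reshape(T, 2)
print("mean device-position error (m):",
      np.linalg.norm(est_path - true_path, axis=1).mean())
```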

CY 2025
FrankLM: Fact-check and report automation via neural knowledge Language Modeling
Principal Investigator: Gao Wei
School of Computing and Information Systems
Funding Source: AI Singapore Research Programme Grant Call 2024
Project Synopsis: 

The proposed project studies how to automate fact-checking (FC) based on neural language models. FC is the investigative process of verifying and reporting the accuracy of claims to help people make decisions based on facts rather than misinformation. Our proposed project, named FrankLM, targets Fact-check and report automation via neural knowledge Language Modeling to enable the applicability of LLMs for FC. We aim to improve task accuracy by 20% for claim verification and explanation generation, improve task accuracy by 15% for reasoning, and achieve over 90% of human performance in report generation. FrankLM will benefit FC and improve the accuracy, explainability, and trustworthiness of AI systems, and it will open new opportunities to apply them in various sectors including media, healthcare, finance, and education. 

This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG3-RP-2024-035).

CY 2025
Secure, Private, and Verified Data Sharing for Large Model Training and Deployment
Co-Principal Investigator: Xie Xiaofei
School of Computing and Information Systems
Funding Source: CyberSG R&D Programme Office
Project Synopsis: 

In this proposal, we consider a real-world setting in which a large model trainer such as OpenAI already holds large-scale training data but continuously needs fresh data to update the model or to support specific downstream tasks. Such a data-sharing mechanism benefits both the model trainer and the data provider, who should be compensated for their contribution. This motivates us to achieve the following four key objectives: (1) private pre-processing of training data sharing for large models; (2) secure and private regulation-compliance inspection of shared training data, with verification and proof; (3) privacy-preserving dynamic fine-grained training data sharing; and (4) privacy-preserving inference on large models.

CY 2025
ESG-based Responsible AI: Toward Green, Secure, and Compliant LLM Utilisation for Digital Service Development Process
Principal Investigator: David Lo
School of Computing and Information Systems
Funding Source: Agency for Science, Technology and Research
Project Synopsis: 

This research project, developed under the CSIRO - A*STAR Research-Industry (2+2) Partnership Program, aims to develop sustainable and responsible AI technologies, with a particular focus on large language models (LLMs). The project's objective is twofold: enhancing environmental sustainability and ensuring compliance with governance standards.

CY 2024
Conversational Health AI for Mental Health
Principal Investigator: Lim Ee Peng
School of Computing and Information Systems
Funding Source: Singapore Ministry of Health through the National Medical Research Council (NMRC) Office, MOH Holdings Pte Ltd
Project Synopsis: 

The project conducts research on new conversational AI technologies that understand a user's mental health conditions and enable a principled strategy to counsel the user. The research will focus on incorporating user personalisation and counselling strategies into the AI models. By the end of the project, we aim to create a conversational AI framework that can automate mental health counselling and to evaluate its performance.

CY 2024
Unleashing the Potential of Photoplethysmography for Wearable Healthcare
Principal Investigator: Ma Dong
School of Computing and Information Systems
Funding Source: Ministry of Education Academic Research Fund Tier 2
Project Synopsis: 

In the dynamic field of wearable health technology, our proposed research aims to revolutionise how we monitor our health using devices such as smartwatches and earbuds. These devices frequently employ photoplethysmography (PPG), a noninvasive technique that monitors changes in blood volume under the skin, providing valuable insights into cardiovascular health. However, real-world challenges, such as inaccuracies during physical activities and the impact of diverse body postures, prevent PPG technology from reaching its full potential on these wearable devices.

Our research focuses on a breakthrough concept: incorporating contact pressure (CP) into PPG measurements to address the aforementioned challenges. By analysing how tightly a wearable device sits against the skin, we aim to obtain valuable insights that help reconstruct high-quality PPG signals from noisy PPG data. Our first contribution is the development of a wearable prototype capable of concurrently measuring CP and PPG. Using this prototype, we will develop intelligent algorithms to mitigate the effects of physical activities and body postures. Finally, we will optimise the energy efficiency and real-time processing of our methods, ensuring prolonged battery life for wearable devices.
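The reconstruction algorithms themselves are part of the research and are not specified here. Purely as an illustration of how a CP channel could inform PPG denoising, the sketch below uses CP as the reference input to a simple least-mean-squares (LMS) adaptive filter that suppresses pressure-correlated artefacts; all signal parameters are synthetic assumptions.

```python
# Illustrative only: an LMS adaptive filter that uses the contact-pressure (CP)
# channel as a reference to suppress pressure-correlated artefacts in a PPG
# trace. This is not the project's algorithm, just a sketch of the idea.
import numpy as np

def lms_denoise(ppg, cp, taps=16, mu=0.01):
    """Remove the component of `ppg` that is predictable from recent CP samples."""
    w = np.zeros(taps)
    clean = np.zeros_like(ppg)
    for n in range(taps, len(ppg)):
        ref = cp[n - taps:n][::-1]          # most recent CP samples, newest first
        est = w @ ref                       # pressure-driven artefact estimate
        clean[n] = ppg[n] - est
        w = w + mu * clean[n] * ref         # LMS weight update
    return clean

# Synthetic demo: a 1.2 Hz pulse corrupted by a slow pressure artefact
t = np.arange(0, 10, 1 / 100)               # 10 s sampled at 100 Hz
pulse = np.sin(2 * np.pi * 1.2 * t)
cp = np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
ppg = pulse + 0.8 * cp
print("artefact power before / after:",
      round(np.var(ppg - pulse), 3), round(np.var(lms_denoise(ppg, cp) - pulse), 3))
```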

We believe that these innovations can substantially improve the accuracy and reliability of health data obtained from wearables, thereby unlocking new capabilities of PPG in health monitoring. The success of this research has the potential to stimulate market growth by establishing a new standard for accuracy and capability in wearable devices. Given the global ageing population, our research will considerably impact elderly care, particularly the monitoring of cardiovascular disease. By improving the reliability of wearable devices, our research can promote an active lifestyle and contribute to overall well-being among the general population.