
External Research Grants

CY 2023
TrustedSEERs: Trusted Intelligent Work Bots for Engineering Better Software Faster
Principal Investigator: David Lo
School of Computing and Information Systems
Funding Source: National Research Foundation
Project Synopsis: 

This project will pioneer approaches that realize trusted automation bots acting as concierges and interactive advisors to software engineers, improving both their productivity and software quality. TrustedSEERs will realize such automation by effectively learning from domain-specific, loosely-linked, multi-modal, multi-source and evolving software artefacts (e.g., source code, version history, bug reports, blogs, documentation, Q&A posts and videos). These artefacts can come from the organization deploying the automation bots, from a group of collaborating yet privacy-aware organizations, and from freely available yet possibly licensed (e.g., GPL v2, GPL v3 or MIT) data contributed by many parties, including untrusted entities, on the internet. TrustedSEERs will bring about the next generation of Software Analytics (SA) – a rapidly growing area of Software Engineering research that turns data into automation – by establishing two initiatives. First, data-centric SA: the design and development of methods that can systematically engineer (link, select, transform, synthesize, and label) the data needed to learn more effective SA bots from diverse software artefacts, many of which are domain-specific and unique. Second, trustworthy SA: the design and development of mechanisms that can engender software engineers’ trust in SA bots, considering both intrinsic factors (explainability) and extrinsic ones (compliance with privacy and copyright laws, and robustness to external attacks). In addition, TrustedSEERs will apply its core technologies to synergistic applications that improve engineer productivity and software security.

CY 2023
Public Cleanliness Satisfaction Survey
Principal Investigator: Paulin Tay Straughan
School of Social Sciences
Funding Source: Ministry of Sustainability and the Environment (MSE)
Project Synopsis: 

(This is additional funding to SMU for the existing research project.) MSE and SMU are collaborating to conduct the Public Cleanliness Satisfaction Survey (PCSS), an annual national household survey that measures and tracks Singaporeans’ satisfaction with, and perceptions of, public cleanliness and public hygiene. Findings from the survey will help identify key areas of concern and inform policy and operational recommendations to improve the public’s satisfaction with public cleanliness, public hygiene and/or public cleaning services.

CY 2023
Unleashing the Power of Pre-trained Models for VisualQA: A Skill-based Framework
Principal Investigator: Jiang Jing
School of Computing and Information Systems
Funding Source: Ministry of Education’s Academic Research Fund Tier 2
Project Synopsis: 

Consumers have widely adopted conversational AI systems such as Siri, Google Assistant and now ChatGPT. The next generation of conversational AI systems will have visual understanding capabilities, communicating with users through both language and visual data. A core technology enabling such multimodal, human-like AI systems is visual question answering: the ability to answer questions based on information found in images and videos. This project aims to develop new visual question answering technologies based on large-scale pre-trained vision-language models. Pre-trained models developed by tech giants, particularly OpenAI, have made headlines in recent years, e.g., ChatGPT, which can converse with users in human language, and DALL-E 2, which can generate realistic images. The project will systematically analyse these pre-trained models’ capabilities and limitations in visual question answering, and design technical solutions to bridge the gap between what pre-trained models can accomplish and what visual question answering systems require. The end result will be a new framework for building visual question answering systems on top of existing pre-trained models with minimal additional training.
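
As a concrete illustration of the starting point for such a framework, the sketch below queries an off-the-shelf pre-trained vision-language model with an image-question pair and no task-specific fine-tuning. It assumes the Hugging Face transformers library and the publicly released BLIP VQA checkpoint; the model choice and file name are illustrative, not those used in the project.

    from PIL import Image
    from transformers import BlipProcessor, BlipForQuestionAnswering

    # Load a publicly released vision-language model fine-tuned for VQA.
    processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
    model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

    image = Image.open("kitchen.jpg")  # hypothetical input image
    question = "How many cups are on the table?"

    # Encode the image-question pair and generate an answer zero-shot.
    inputs = processor(image, question, return_tensors="pt")
    answer_ids = model.generate(**inputs)
    print(processor.decode(answer_ids[0], skip_special_tokens=True))

Measuring where such zero-shot answers fail is precisely the kind of capability-and-limitation analysis the project proposes before designing bridging solutions.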

CY 2023
Mobile-friendly Data Visualization
Principal Investigator: Wang Yong
School of Computing and Information Systems
Funding Source: Ministry of Education’s Academic Research Fund Tier 2
Project Synopsis: 

Data visualisations are widely used on mobile devices (e.g., smartphones), but both their creation and their use suffer from mobile-friendliness issues. This project aims to develop novel techniques for mobile-friendly data visualisation, covering both the creation of mobile data visualisations and the design of effective multimodal interactions with them. The research outputs of this project will significantly improve the effectiveness and usability of mobile data visualisations and further promote their adoption.

CY 2023
Food Recognition: Causality-driven Cross-modal Cross-lingual Domain Adaptation
Principal Investigator: Ngo Chong Wah
School of Computing and Information Systems
Funding Source: Ministry of Education’s Academic Research Fund Tier 2
Project Synopsis: 

This project aims to improve the scalability of food recognition, i.e., to train classifiers that recognise a wide range of dishes regardless of cuisine and of the amount and type of training examples. Here, a “classifier” can be viewed as a “search engine” that retrieves the recipe for a food image (see the sketch below). Training such classifiers requires a very large number of training examples composed of recipes and images, where each recipe is paired with at least one image as a visual reference. Training on such paired or parallel data faces several practical limitations: tens of thousands of recipe-image pairs are required; other forms of data that are widely available to the public cannot be leveraged for model training; and additional training data is required when recipes are written in different natural languages. This project will address these limitations from the perspective of transfer learning. The aim is to train a generalised classifier that adapts more readily to new recognition settings by removing statistical bias, considering the evolving process, and aligning the semantics of different languages in machine learning.
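
To make the “classifier as search engine” view concrete, the following minimal sketch embeds a food photo and a set of candidate recipe titles in a shared vision-language space and retrieves the closest recipe by similarity. It assumes the Hugging Face transformers library with the public CLIP checkpoint; the model, the file name and the toy recipe list are illustrative assumptions, not the project’s actual approach.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    recipes = ["Hainanese chicken rice", "laksa", "chilli crab"]  # toy corpus
    image = Image.open("dish.jpg")  # hypothetical food photo

    # Embed the image and all recipe texts, then score image-text similarity.
    inputs = processor(text=recipes, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_image  # shape: (1, len(recipes))
    print("Retrieved recipe:", recipes[scores.argmax().item()])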

CY 2023
Executable AI Semantics for AI Framework Analysis
Principal Investigator: Sun Jun
School of Computing and Information Systems
Funding Source: Ministry of Education’s Academic Research Fund Tier 2
Project Synopsis: 

This project aims to provide a solid foundation for analysing AI systems, as well as techniques that facilitate the development of reliable and secure AI systems. Central to the research is the development of an executable specification, in the form of an abstract logical representation, of all components used to build artificial intelligence. This subsequently enables powerful techniques to address three problems commonly encountered in AI systems: how to ensure the quality and correctness of AI libraries, how to systematically locate bugs in neural network programs, and how to fix those bugs. In other words, this project aims to define a semantics for AI models, thereby forming a solid foundation on which to build AI systems.

CY 2022
Text Style Transfer with Pre-Trained Language Models
Principal Investigator: Jiang Jing
School of Computing and Information Systems
Funding Source: DSO National Laboratories
Project Synopsis: 

Text style transfer (TST) is the task of converting a piece of text written in one style (e.g., informal text) into text written in a different style (e.g., formal text). It has applications in many scenarios, such as AI-based writing assistance and the removal of offensive language in social media posts. In recent years, with the advances in large-scale pre-trained language models such as the Generative Pre-trained Transformer 3 (GPT-3), an autoregressive language model that uses deep learning to produce human-like text, solutions to TST have shifted towards fine-tuning-based and prompt-based approaches. In this project, we will study how to effectively utilize pre-trained language models for TST under low-resource settings. We will also design ways to measure whether solutions based on pre-trained language models can disentangle content and style.
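
The prompt-based direction mentioned above can be illustrated with a few-shot prompt: a handful of informal-formal pairs are placed in the context and a pre-trained language model completes the pattern, requiring no gradient updates at all. The sketch below assumes the Hugging Face transformers library; the small GPT-2 checkpoint stands in for a much larger model such as GPT-3 and is an illustrative choice only.

    from transformers import pipeline

    # A pre-trained autoregressive language model; a stand-in for GPT-3.
    generator = pipeline("text-generation", model="gpt2")

    # Few-shot prompt: demonstrations of the informal -> formal transfer.
    prompt = (
        "Rewrite each informal sentence formally.\n"
        "Informal: gotta head out, ttyl\n"
        "Formal: I have to leave now; I will talk to you later.\n"
        "Informal: that meeting was a total mess\n"
        "Formal:"
    )

    out = generator(prompt, max_new_tokens=30, do_sample=False)
    completion = out[0]["generated_text"][len(prompt):]
    print(completion.split("\n")[0].strip())  # the model's formal rewrite

A low-resource setting then amounts to asking how few such demonstrations (or fine-tuning examples) are needed before the rewrite preserves content while changing only style.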

CY 2022
AI Audits for Who? Asian Perspectives on Rebuilding Public Trust via Community Ethics and Conflict Resolution Mechanisms
Principal Investigator: Willow Wong
Centre for AI and Data Governance
Funding Source: Notre Dame-IBM Technology Ethics Lab
Project Synopsis: 

The governance of artificial intelligence (AI) to mitigate societal and individual harm through ethics-by-design calls for equal attention to responsible data use before public trust can be conferred on AI technologies. Since trust is fundamentally rooted in community relationships, AI regulators seeking public acceptance of AI innovation must attend to community-centric pathways that integrate data subjects’ voices into AI ethical decision-making. While traditional actuarial methods in financial audits can draw on a diverse range of evidence to determine legal compliance, the researchers suggest that community interests and data subjects’ voices should not be absent from AI audit models. This research will explore Singaporean (and broader Asian) perspectives on AI regulation to inform the motivations for using AI audits to rebuild public trust. Analysis of the proposed scope and methodologies of AI audits will be followed by recommendations on the relevant skillsets for future AI auditors.

CY 2022
Weakly-supervised Semantic Segmentation and Its Applications in SAR Images
Principal Investigator: Sun Qianru
School of Computing and Information Systems
Funding Source: DSO National Laboratories
Project Synopsis: 

This project aims to learn efficient semantic segmentation models without using expensive annotations. Specifically, we leverage the most economical form of supervision, image-level labels, to generate pseudo masks that facilitate the training of segmentation models. Finally, we will apply the resulting algorithms to remote sensing image segmentation on challenging continual, few-shot and open-set datasets.
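
The pseudo-mask step can be sketched with class activation maps (CAM), a standard way to localise objects from image-level labels alone: the classifier’s weights for a class are applied to the backbone’s spatial features, and high-activation regions become the pseudo ground truth. The code below is a minimal sketch assuming PyTorch and a torchvision ResNet-50; the threshold and the ImageNet classifier are illustrative placeholders for the project’s actual models.

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet50

    model = resnet50(weights="IMAGENET1K_V2").eval()
    # Backbone up to the last convolutional block (drops pooling and fc).
    backbone = torch.nn.Sequential(*list(model.children())[:-2])

    @torch.no_grad()
    def pseudo_mask(image, class_idx, threshold=0.4):
        """image: (1, 3, H, W) tensor; returns a binary (H, W) pseudo mask."""
        feats = backbone(image)                      # (1, 2048, h, w)
        weights = model.fc.weight[class_idx]         # class weights, (2048,)
        cam = F.relu(torch.einsum("c,bchw->bhw", weights, feats))
        cam = F.interpolate(cam[None], size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return (cam > threshold).long()              # pseudo ground truth

    mask = pseudo_mask(torch.randn(1, 3, 224, 224), class_idx=281)

The resulting masks are noisy, which is exactly why they serve only as pseudo (rather than true) annotations for training the downstream segmentation model.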

CY 2022
Building group cohesion through leader oratory and perceptions of the impact of speaker practices across different audience groups
Principal Investigator: Timothy Clark
Lee Kong Chian School of Business
Funding Source: Temasek Laboratories at Nanyang Technological University (TL@NTU)
Project Synopsis: 

For the Singapore leader, the final audience is always larger than the physical audience at a particular venue. The importance of leadership oratory is not confined to live, co-present audiences: wider audiences have long viewed political and organisational leaders’ speeches via television (and radio) and various recording technologies (VHS, DVD). More recently, it has become common for speeches to be broadcast live on the internet and/or disseminated via online video. As a result, they can be viewed by potentially vast and diverse national and global audiences at different times, in a wide variety of contexts, using a range of devices (Wenzel and Koch, 2018; Rossette-Crake, 2020). According to Rossette-Crake (2020), since the turn of the century it has become standard practice for speeches to be written and delivered with this in mind, leading to changes akin to the way political oratory was transformed by radio and television during the 20th century (Greatbatch and Clark, 2005). Building on these points, this research project seeks to establish which oratorical practices are associated with positive persuasive outcomes and inspire trust and a sense of group cohesiveness among members of diverse audiences. It will answer two questions: (1) what verbal and non-verbal practices are associated with establishing trust and a sense of group cohesiveness among members of diverse audiences during live speeches, and (2) how do diverse audience members perceive the impact of these practices, and do the themes of the speeches also influence their perceptions?