Data visualisations are widely used on mobile devices (e.g., smartphones), but their creation and usage still face mobile-friendliness issues. This project aims to develop novel techniques for mobile-friendly data visualisations, including desirable mobile data visualisation creation and effective multimodal interaction design. The research outputs of this project will significantly improve the effectiveness and usability of mobile data visualisations and further promote their applications.
This project aims to improve the scalability of food recognition: to train classifiers that recognise a wide range of dishes regardless of cuisine or the amount and type of training examples. Here, a “classifier” can be viewed as a “search engine” that retrieves the recipe of a food image. Training such classifiers requires an excessive number of training examples composed of recipes and images, where each recipe is paired with at least one image as a visual reference. Training classifiers on paired or parallel data faces several practical limitations: tens of thousands of recipe-image pairs are required for training; other forms of data that are widely available to the public cannot be leveraged for model training; and additional training data is required when the recipes are written in different natural languages. Through this project, these practical limitations will be addressed from the perspective of transfer learning. The aim is to train a generalised classifier that is more adaptable for recognition, by removing statistical bias, considering the evolving process, and aligning the semantics of different languages in machine learning.
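To make the recognition-as-retrieval framing concrete, the sketch below matches a food image against a bank of pre-computed recipe embeddings in a shared space. The encoders that would produce `image_emb` and `recipe_embs` are hypothetical stand-ins, not the project's actual models.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of recognition-as-retrieval: a food image embedding is
# matched to the closest recipe embeddings by cosine similarity.
def retrieve_recipes(image_emb, recipe_embs, recipe_ids, top_k=5):
    """image_emb: (d,) tensor; recipe_embs: (N, d) tensor of recipe embeddings."""
    sims = F.cosine_similarity(image_emb.unsqueeze(0), recipe_embs)  # (N,)
    top = torch.topk(sims, k=top_k).indices
    return [recipe_ids[i] for i in top.tolist()]
```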
This project aims to provide a solid foundation for analysing AI systems, as well as techniques that facilitate the development of reliable and secure AI systems. Central to the research is the development of an executable specification, in the form of an abstract logical representation, of all components used to build artificial intelligence. This specification subsequently enables powerful techniques that address three problems commonly encountered in AI systems: how to ensure the quality and correctness of AI libraries, how to systematically locate bugs in neural network programs, and how to fix those bugs. In other words, this project aims to define a semantics of AI models, thereby forming a solid foundation on which to build AI systems.
Text style transfer (TST) is the task of converting a piece of text written in one style (e.g., informal text) into text written in a different style (e.g., formal text). It has applications in many scenarios, such as AI-based writing assistance and the removal of offensive language in social media posts. In recent years, with advances in large-scale pre-trained language models such as the Generative Pre-trained Transformer 3 (GPT-3), an autoregressive language model that uses deep learning to produce human-like text, solutions to TST have been shifting towards fine-tuning-based and prompt-based approaches. In this project, we will study how to effectively utilize pre-trained language models for TST under low-resource settings. We will also design ways to measure whether solutions based on pre-trained language models can disentangle content and style.
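As an illustration of the prompt-based direction, here is a minimal few-shot prompting sketch. It uses GPT-2 via the Hugging Face transformers pipeline as a freely available stand-in for GPT-3, and the prompt format is an assumption for illustration only, not the project's method.

```python
from transformers import pipeline

# Few-shot, prompt-based informal-to-formal style transfer with a small
# open model standing in for GPT-3.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Rewrite the informal sentence formally.\n"
    "Informal: gotta bounce, talk later.\n"
    "Formal: I have to leave now; let us speak later.\n"
    "Informal: this movie was kinda meh.\n"
    "Formal:"
)
output = generator(prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"]
print(output[len(prompt):].strip())  # the model's continuation only
```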
This project aims to learn efficient semantic segmentation models without using expensive annotations. Specifically, we leverage the most economical image-level labels to generate pseudo masks that facilitate the training of segmentation models. Finally, we will apply the resulting algorithms to remote sensing image segmentation on challenging continual, few-shot, and open-set datasets.
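One common way to turn image-level labels into pseudo masks is via class activation maps (CAM). The sketch below, built on a torchvision ResNet-50, is an assumption about this general approach rather than the project's actual algorithm; the threshold and backbone are illustrative choices.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

# Hypothetical CAM-based pseudo-mask generation from an image-level label.
model = resnet50(weights="IMAGENET1K_V2").eval()
backbone = torch.nn.Sequential(*list(model.children())[:-2])  # up to the last conv stage

def pseudo_mask(image, class_idx, threshold=0.4):
    """image: (1, 3, H, W) tensor; returns a binary (H, W) pseudo mask."""
    with torch.no_grad():
        feats = backbone(image)                          # (1, 2048, h, w) feature maps
        weights = model.fc.weight[class_idx]             # (2048,) class-specific weights
        cam = torch.einsum("c,bchw->bhw", weights, feats)  # class activation map
        cam = F.interpolate(cam[None], size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
        return (cam > threshold).float()                 # threshold into a pseudo mask
```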
This project aims to design a hierarchical cross-network multi-agent reinforcement-learning-based trading strategy generator and to examine a governance framework for crypto asset markets.
This proposal contributes to Thrust 3 of the National Quantum Computing Hub (NQCH), which is focused on translational R&D, such as the development of libraries, pre-built models, and templates to enable easier and faster programming and development of software applications by early adopters in industry, government agencies, and Institutes of Higher Learning (IHLs). This project aims to develop hybrid quantum-classical algorithms and tools that will contribute to these libraries and pre-built models for supply chain use cases. Compared with classical techniques, we aim to enhance the performance of Sample Average Approximation (SAA) and simulation optimization in a way that is verifiable on today's noisy intermediate-scale quantum (NISQ) hardware, and to apply these algorithms to supply chain risk management contexts. It is anticipated that these algorithms will achieve higher-quality and computationally attractive solutions compared with purely classical algorithms.
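For reference, classical SAA replaces an expected cost with an average over sampled scenarios and then optimizes that deterministic approximation. The toy newsvendor-style baseline below (demand distribution, costs, and grid search are all illustrative assumptions) shows the classical starting point that the proposed hybrid quantum-classical algorithms aim to enhance.

```python
import numpy as np

# Classical Sample Average Approximation (SAA) on a toy single-item ordering problem.
rng = np.random.default_rng(0)
demand_samples = rng.lognormal(mean=4.0, sigma=0.5, size=1000)  # sampled demand scenarios

holding_cost, shortage_cost = 1.0, 4.0

def avg_cost(order_qty, demands):
    # SAA: average the cost function over the sampled scenarios.
    over = np.maximum(order_qty - demands, 0.0)
    under = np.maximum(demands - order_qty, 0.0)
    return np.mean(holding_cost * over + shortage_cost * under)

candidates = np.linspace(0, 200, 401)
best_qty = min(candidates, key=lambda q: avg_cost(q, demand_samples))
print(f"SAA order quantity: {best_qty:.1f}")
```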
This is a project under the AI Singapore 100 Experiments (Research) Programme. BIPO has a unique advantage in payroll processing and saw an opportunity to build a rostering tool anchored on downstream pay outcomes, one that provides feedback not only on staff costs, productivity, and preferences, but also on skills-based job evaluation and design. BIPO's client pool in labour-intensive industries such as logistics, retail (restaurants, shops), call centers, healthcare, and hospitality has an acute need for a rostering tool that is based on roles, skills, and pay. In this project, we combine constraint programming with adaptive large neighborhood search to generate rosters that satisfy rostering requirements while maximizing the preferences of employees. We also cover the dynamic setting, where reinforcement learning is applied to prescribe changes to the roster in response to changes in the environment.
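The following is a schematic adaptive large neighborhood search loop of the kind described above; the destroy/repair operators (including a CP-based repair), cost function, and acceptance rule are placeholders for illustration, not BIPO's production system.

```python
import math
import random

# Schematic ALNS loop for rostering: repeatedly destroy part of the current
# roster, repair it (e.g. with a constraint-programming subroutine), and
# adaptively reward the destroy operators that lead to improvements.
def alns(initial_roster, cost, destroy_ops, repair_ops,
         iters=1000, temp=1.0, cooling=0.999):
    current = best = initial_roster
    weights = {op: 1.0 for op in destroy_ops}              # adaptive operator weights
    for _ in range(iters):
        destroy = random.choices(list(weights), weights=weights.values())[0]
        repair = random.choice(repair_ops)                 # e.g. a CP-based repair
        candidate = repair(destroy(current))
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate                            # simulated-annealing-style acceptance
            weights[destroy] += 1.0 if delta < 0 else 0.1  # reward useful operators
        if cost(candidate) < cost(best):
            best = candidate
        temp *= cooling
    return best
```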
This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-100E-2022-098).
This project studies a way to efficiently bootstrap graph neural networks (GNNs), a deep learning technique on graphs. A graph (also called a network) contains different entities, which are linked based on their interactions to form complex networks. However, to achieve optimal performance, GNNs require, for each graph and analytics task, a large amount of task-specific labels, which are example cases observed in the past. Such labels are often unavailable or expensive to collect at scale. In contrast, label-free graphs (i.e., graphs without task-specific labels) are more readily available in various domains. To overcome this critical limitation, the project team turns to GNN pre-training, which can efficiently bootstrap GNNs using label-free graphs and only a small amount of task-specific labels, to capture intrinsic graph properties that generalize across tasks and graphs within a domain. Practical applications of this research include fraud detection and anti-money laundering on financial networks, container demand and shipping prediction on supply chain networks, and talent matching on job/skill graphs.
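A minimal sketch of one popular pre-training recipe, contrastive learning between two augmented views of a label-free graph (GraphCL-style), is given below using PyTorch Geometric; the actual pre-training objective and architecture in this project may differ.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

# Two-layer GCN encoder to be pre-trained on label-free graphs.
class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

def drop_edges(edge_index, p=0.2):
    # Random edge dropping as a simple graph augmentation.
    keep = torch.rand(edge_index.size(1)) > p
    return edge_index[:, keep]

def contrastive_loss(z1, z2, tau=0.5):
    # Node-level InfoNCE loss between two augmented views of the same graph.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Usage idea: encode two augmented views of a label-free graph, minimise the
# contrastive loss, then fine-tune the encoder with a few task-specific labels.
```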
Digital wellbeing has arisen in public, governmental, and policy discourse as a key measure of a person's wellbeing through healthy use of technology. This project aims to identify and measure digital wellbeing in relation to digital readiness, inclusion, and safety. Building on the Digital Wellbeing Indicator Framework (DWIF) developed by researchers at the NUS Centre for Trusted Internet and Community, this project will test, evaluate, and revise the DWIF by conducting both qualitative and quantitative analyses of data collected from the local context (i.e., Singapore) and global contexts (i.e., UK, US, China), with a specific focus on mainstream job trends (digital readiness), minority disability access (digital inclusion), and women (digital safety).