This project aims to provide a solid foundation for analysing AI systems, together with techniques that facilitate the development of reliable and secure AI systems. Central to the research is the development of an executable specification, in the form of an abstract logical representation of all components used to build artificial intelligence, which subsequently enables powerful techniques for addressing three problems commonly encountered in AI systems: how to ensure the quality and correctness of AI libraries, how to systematically locate bugs in neural network programs, and how to fix those bugs. In other words, this project aims to define a semantics of AI models, thereby forming a solid foundation on which to build AI systems.
Text style transfer (TST) is the task of converting a piece of text written in one style (e.g., informal text) into text written in a different style (e.g., formal text). It has applications in many scenarios, such as AI-based writing assistance and the removal of offensive language in social media posts. In recent years, with the advances of pre-trained large-scale language models such as the Generative Pre-trained Transformer 3 (GPT-3), an autoregressive language model that uses deep learning to produce human-like text, solutions to TST have been shifting to fine-tuning-based and prompt-based approaches. In this project, we will study how to effectively utilize pre-trained language models for TST under low-resource settings. We will also design ways to measure whether solutions based on pre-trained language models can disentangle content and style.
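To make the prompt-based approach concrete, the sketch below assembles a few-shot prompt for informal-to-formal transfer that a pre-trained language model could complete. The instruction wording and example pairs are illustrative assumptions, not the project's actual prompts or data.

```python
# Hypothetical few-shot example pairs (informal, formal) for style transfer.
EXAMPLES = [
    ("gonna be late, soz", "I apologise; I will be late."),
    ("that movie was sooo good", "The film was excellent."),
]

def build_prompt(sentence, examples=EXAMPLES):
    """Assemble a few-shot prompt; a pre-trained LM would generate the
    formal rewrite as the completion after the final 'Formal:' cue."""
    lines = ["Rewrite each informal sentence in a formal style.", ""]
    for informal, formal in examples:
        lines.append(f"Informal: {informal}")
        lines.append(f"Formal: {formal}")
    lines.append(f"Informal: {sentence}")
    lines.append("Formal:")
    return "\n".join(lines)

prompt = build_prompt("cant make it 2day")
```

In a low-resource setting, the number and choice of in-context example pairs becomes the main design lever, since no task-specific fine-tuning data is assumed.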
The governance of artificial intelligence (AI) to mitigate societal and individual harm through ethics-by-design calls for equal attention to responsible data use before public trust can be conferred on AI technologies. Since trust is fundamentally rooted in community relationships, AI regulators seeking public acceptance of AI innovation must attend to community-centric pathways for integrating data subjects’ voices into AI ethical decision-making. While traditional actuarial methods in financial audits can draw on a diverse range of evidence to determine legal compliance, the researchers suggest that community interests and data subjects’ voices should not be absent from AI audit models. This research proposal will explore Singaporean (and Asian) perspectives on AI regulation to inform the motivations for using AI audits to rebuild public trust. Analysis of the proposed scope and methodologies of AI audits will be followed by recommendations on the relevant skillsets for future AI auditors.
This project aims to learn efficient semantic segmentation models without using expensive annotations. Specifically, we leverage the most economical form of supervision, image-level labels, to generate pseudo masks that facilitate the training of segmentation models. Finally, we will apply the resulting algorithms to remote sensing image segmentation on the challenging continual, few-shot, and open-set datasets.
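One common way image-level labels yield pseudo masks is to threshold a class activation map (CAM) into a binary foreground mask. The sketch below shows only that binarisation step; the CAM values and threshold are illustrative assumptions, not the project's actual pipeline.

```python
def cam_to_pseudo_mask(cam, threshold=0.5):
    """Binarise a class activation map: pixels whose activation reaches
    `threshold` times the peak activation become foreground (1)."""
    peak = max(max(row) for row in cam)
    return [[1 if v >= threshold * peak else 0 for v in row] for row in cam]

# Toy 3x3 activation map for a single class (hypothetical values).
cam = [
    [0.1, 0.2, 0.1],
    [0.3, 0.9, 0.6],
    [0.2, 0.5, 0.4],
]
mask = cam_to_pseudo_mask(cam)  # pseudo mask used as a training target
```

The resulting mask can then supervise a segmentation network in place of a hand-drawn annotation, trading some label noise for a drastic reduction in annotation cost.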
For the Singapore leader, the final audience is always larger than the physical audience at a particular venue. The importance of leadership oratory is not confined to live co-present audiences, as wider audiences have long viewed political and organisational leaders’ speeches via television (and radio) and via various recording technologies (VHS, DVD). Recently, it has become common for speeches to be broadcast live on the internet and/or disseminated via online video. As a result, they can be viewed by potentially vast and diverse national and global audiences at different times, in a wide variety of contexts, using a range of devices (Wenzel and Koch, 2018; Rossette-Crake, 2020). According to Rossette-Crake (2020), since the turn of the century it has become standard practice for speeches to be written and delivered with this in mind, and this is leading to changes akin to the way in which political oratory was transformed by radio and television during the 20th century (Greatbatch and Clark, 2005). Building on these points, this research project seeks to establish which oratorical practices are associated with positive persuasive outcomes and inspire trust and a sense of group cohesiveness amongst members of diverse audiences. It will answer two questions: (1) What verbal and non-verbal practices are associated with establishing trust and a sense of group cohesiveness among members of diverse audiences during live speeches? (2) How do diverse audience members perceive the impact of these practices, and do the themes of the speeches also influence their perceptions?
This project aims to design a hierarchical, cross-network, multi-agent reinforcement-learning-based trading strategy generator and to examine a governance framework for crypto-asset markets.
This proposal contributes to Thrust 3 of the National Quantum Computing Hub (NQCH), which is focused on translational R&D, such as the development of libraries, pre-built models, and templates to enable easier and faster programming and development of software applications by early adopters in industry, government agencies, and Institutes of Higher Learning (IHLs). This project aims to develop hybrid quantum-classical algorithms and tools that will contribute to these libraries and pre-built models for supply chain use cases. Compared with classical techniques, we aim to enhance the performance of Sample Average Approximation (SAA) and simulation optimisation in a manner that is verifiable on today’s NISQ quantum hardware, and to apply these algorithms in supply chain risk management contexts. It is anticipated that these algorithms will achieve higher-quality and computationally attractive solutions over purely classical algorithms.
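For readers unfamiliar with SAA, the classical (non-quantum) baseline can be illustrated on a newsvendor problem: the stochastic programme is replaced by its sampled counterpart, whose optimal order quantity is the critical-ratio quantile of the sampled demand. The demand distribution, price, and cost below are illustrative assumptions, not data from the project.

```python
import random

def saa_newsvendor(demand_samples, price, cost):
    """Sample Average Approximation for the newsvendor problem: the
    sampled problem's optimal order is the critical-ratio quantile
    of the empirical demand distribution."""
    ratio = (price - cost) / price  # critical ratio (underage vs. overage)
    s = sorted(demand_samples)
    return s[min(len(s) - 1, int(ratio * len(s)))]

random.seed(0)
# 10,000 demand scenarios drawn from an assumed N(100, 20) distribution.
samples = [random.gauss(100, 20) for _ in range(10_000)]
order = saa_newsvendor(samples, price=10, cost=4)  # near the 60% quantile
```

The quality of the SAA solution improves with the number of sampled scenarios, which is precisely where sampling speed-ups, quantum or classical, become attractive.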
This is a project under the AI Singapore 100 Experiments (Research) Programme. BIPO has a unique advantage in payroll processing and saw an opportunity to build a tool anchored on downstream pay outcomes as an enabler in the strategic design of a rostering tool, one that should provide feedback not only on staff costs, productivity, and preferences, but also on skills-based job evaluation and design. BIPO’s clients in labour-intensive industries such as logistics, retail (restaurants, shops), call centers, healthcare, and hospitality have an acute need for a rostering tool based on roles, skills, and pay. In this project, we combine constraint programming with adaptive large neighborhood search to generate rosters that satisfy the rostering requirements while maximizing the preferences of employees. We also cover the dynamic setting, where reinforcement learning is applied to prescribe changes to the roster in response to changes in the environment.
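The destroy-and-repair loop at the heart of large neighborhood search can be sketched on a toy rostering instance: a constraint (each employee works at most two shifts) is enforced during greedy repair, while the search maximizes preference scores. The employees, shifts, preference values, and parameters are all hypothetical, and a real ALNS would use several adaptive destroy operators rather than the single random one shown here.

```python
import random

# Hypothetical preference score of each employee for each of 4 shifts.
PREF = {"ana": [3, 1, 2, 0], "ben": [0, 3, 1, 2], "cal": [2, 0, 3, 1]}
SHIFTS = range(4)
MAX_SHIFTS = 2  # constraint: at most two shifts per employee

def score(roster):
    return sum(PREF[e][s] for s, e in roster.items())

def repair(roster):
    """Greedily cover uncovered shifts with the best available employee."""
    load = {e: sum(1 for x in roster.values() if x == e) for e in PREF}
    for s in SHIFTS:
        if s not in roster:
            cands = [e for e in PREF if load[e] < MAX_SHIFTS]
            best = max(cands, key=lambda e: PREF[e][s])
            roster[s] = best
            load[best] += 1
    return roster

def lns(iters=200, destroy=2, seed=1):
    """Destroy a few assignments, repair, keep the candidate if no worse."""
    random.seed(seed)
    best = repair({})
    for _ in range(iters):
        cand = dict(best)
        for s in random.sample(list(SHIFTS), destroy):  # destroy step
            cand.pop(s, None)
        cand = repair(cand)                             # repair step
        if score(cand) >= score(best):
            best = cand
    return best

roster = lns()
```

In the full approach described above, constraint programming would supply feasible initial rosters and check the rostering requirements, with the destroy/repair operators chosen adaptively.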
This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-100E-2022-098).
This project is an interdisciplinary and multi-institute work package, led by SMU, making use of the Digital Urban Climate Twin (DUCT) results from the first Cooling Singapore 2.0 work package to examine urban climate risks and impacts from environmental and physiological perspectives. The objectives include (a) investigating where and who in Singapore will be affected by excessive heat from urbanisation and climate change, and (b) examining whether existing measures, such as vegetation cover, will have reduced effectiveness in minimising heat exposure under a warming climate. Results from this project will aid assessment and future policy development towards urban heat mitigation solutions in Singapore.
This project studies how to efficiently bootstrap graph neural networks (GNNs), a deep learning technique on graphs. A graph (also called a network) contains different entities that are linked based on their interactions, forming complex networks. However, to achieve optimal performance on each graph and analytics task, GNNs require a large amount of task-specific labels, i.e., labelled examples from past cases. Such labels are often unavailable or expensive to collect at scale. In contrast, label-free graphs (i.e., graphs without task-specific labels) are more readily available in various domains. To overcome this critical limitation, the project team turns to GNN pre-training, which can efficiently bootstrap GNNs using label-free graphs and only a small amount of task-specific labels, by capturing intrinsic graph properties that generalize across tasks and graphs within a domain. Practical applications of this research include fraud detection and anti-money laundering on financial networks, container demand and shipping prediction on supply chain networks, and talent matching on job/skill graphs.
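The core operation that both GNN pre-training and downstream fine-tuning build on is message passing, in which each node updates its representation from its neighbours'. The minimal sketch below uses a toy graph, hand-picked features, and simple mean aggregation without learned weights; all of these are illustrative assumptions, not the project's model.

```python
# Toy undirected graph: node -> list of neighbours (hypothetical).
GRAPH = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
# Two-dimensional input feature per node (hypothetical).
FEATS = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.0, 0.0]}

def propagate(graph, feats):
    """One message-passing step: average each node's own feature with
    the mean of its neighbours' features (no learned parameters here)."""
    out = {}
    for node, nbrs in graph.items():
        mean = [sum(feats[n][d] for n in nbrs) / len(nbrs) for d in range(2)]
        out[node] = [(feats[node][d] + mean[d]) / 2 for d in range(2)]
    return out

h1 = propagate(GRAPH, FEATS)  # representations after one step
```

Pre-training fits the learned weights of such layers on label-free graphs (e.g., via self-supervised objectives), so that only a small amount of task-specific labels is needed to adapt the model to each downstream task.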