In this Human-Computer Interaction research, the research team will design a novel system that addresses the low wages of online crowd work, also known as the online gig economy. Drawing on mechanism design from the economics literature, the research team will use machine learning models to design and develop user interfaces that:
- Present information to encourage crowdsourcing requesters to pay a fairer wage to online workers; and
- Use nudging messages and information visualization to persuade workers to submit high-quality work.
This research collaboration with IBM aims to develop the optimisation capabilities needed to build a cutting-edge, resilient supply chain, leveraging data science to preserve the continuity and consistency of product supply and to meet business obligations for product delivery and customer service in the face of both short-term operational and longer-term strategic disruptions. In this project, the team seeks to leverage IBM’s relevant internal, supplier-provided, public and subscription data sources to improve operational decision-making, proactively anticipate and respond to disruptive events, and enable resiliency evaluations for products, product families, and tiered supply networks.
The "SmartBFA 2.0" project aims to build a "Google Maps" equivalent for wheelchair users, so that they can find barrier-free access paths when navigating around Singapore. This objective is in line with Singapore's vision towards building a smart and inclusive city for everyone.
A major innovation of the research team's project is the incorporation of crowdsourced sensor inputs; in particular, they aim to solicit multi-modal data from a smartphone app to supplement the accessibility information already gathered using specially designed sensors. They also seek to collect user feedback, so as to make the system more useful to wheelchair users.
"Learning by doing” (LBD) is the phenomenon where a worker’s productivity rises with cumulative production experience. As LBD requires no additional investment in hiring or equipment investment, it is viewed by many as an important channel for firms to achieve productivity growth. Unfortunately, although conceptually simple and intuitive, the sources and enablers of LBD remain a mystery; as a result, even when a firm intends to facilitate LBD among its employees, it is not clear how to effectively achieve it. This challenge originates from the difficulty in quantifying and isolating the effects of LBD, and even in a few instances where the measurement of LBD effects (in terms of productivity) is made possible by natural events, these measurements are typically only at the aggregate level. In this project, the team aims to build a novel Big Data framework to measure the LBD effects for workers in the transport gig economy in Singapore. Their ambition is to measure LBD effects at not just the productivity level, which is easily tainted by other factors, but also at the skill level. They plan to achieve this by mining drivers’ microscopic movement traces and trip fulfilment (including both taxi and ride-hailing drivers), and quantify drivers’ skills in anticipating demands and competition from other drivers. Their research will provide a rare view into how big data can revamp the understanding of labor productivity and LBD effects at the individual level, and it will help policy makers and platform operators to come up with policies that are more effective in helping workers cope with competitions and sudden changes such as disruptions brought about by the COVID-19 pandemic.
In this multi-pronged initiative, we propose to build a framework for developing certifiable AI systems systematically, i.e. with the help of theories, tools, certification standards and processes. This is motivated by the many recently discovered problems with existing AI techniques and systems, e.g. adversarial examples and privacy and fairness issues, as well as the many ad hoc attempts at fixing them. For AI techniques to truly become part of a wide digital transformation across many industries, it is vital that we have foundational mechanisms to quantify the problems in AI models and to rectify the problems that are discovered.
This project aims to develop a software engine that enables planners to generate optimised routing plans from a given set of data inputs, subject to constraints. This work is an offshoot of the Collaborative Urban Delivery Optimisation (CUDO) project, completed under the Fujitsu-SMU Urban Computing and Engineering (UNiCEN) Corp Lab and funded by the National Research Foundation (NRF). The project will customise and extend the functionalities of the CUDO engine developed at SMU, and integrate the engine into Y3 Technologies’ enterprise platform system via an Application Programming Interface (API).
Deep learning has enabled significant advances with image, text and audio data, in applications such as surveillance, machine translation and speech recognition. These successes have positioned deep learning as a promising approach to address critical challenges (reducing defects, design time and downtime) in the advanced manufacturing and engineering (AME) domain. A major barrier to this broader application of deep learning is its need for large, labeled datasets to obtain good performance. Thus, the team aims to develop novel deep learning methods that can learn with 10 to 100 times less data than current approaches. This project will enable deep learning to be used in a wider range of applications, especially where data is scarce or expensive to obtain. The team will also demonstrate their methods using real-world data from three identified AME applications (tentatively defect identification, predictive maintenance and circuit design) to show progress and applicability to the AME domain. This project is a collaboration between A*STAR’s I2R, SMU, SUTD, NTU and NUS.
Through this project, the team proposes an integrative program of fundamental research towards a vision in which every human will have an AI assistant for daily life and work. The overall aim is to build conceptual understanding of human-AI collaboration, to develop representations, models, and algorithms for situated assistance, and to integrate them in an experimental device platform for evaluation. The research program consists of five thrusts: (i) situated language communication with reasoning, (ii) visual-linguistic situation understanding, (iii) human collaboration modeling, (iv) robust situated teaming, and (v) the integrative showcase project “The Other Me”, which develops and evaluates the experimental platform Tom, a wearable situated AI agent that assists the human in creating novel artifacts or diagnosing faults. The research aims to empower a new generation of AI assistants for situated, just-in-time, and federated assistance, and to reinvent the relationship between the human and the device in our daily life and work. This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2020-016). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
As AI becomes ubiquitous, people’s trust in AI is dwindling. The key barrier to adopting AI is no longer technical in nature, but rather about gaining stakeholders’ trust. Federated Learning (FL), in which training happens where the data are stored and only model parameters leave the data silos, can help AI thrive in a privacy-focused regulatory environment. FL involves self-interested data owners collaboratively training machine learning models; in this way, end-users can become co-creators of AI solutions. To enable open collaboration among FL co-creators and enhance adoption of the federated learning paradigm, this project aims to develop the Trustworthy Federated Ubiquitous Learning (TrustFUL) framework, which will enable communities of data owners to self-organize during FL model training based on three notions of trust: 1) trust through transparency, 2) trust through fairness, and 3) trust through robustness, without exposing sensitive local data. As a technology showcase, we will translate TrustFUL into an FL-powered AI model crowdsourcing platform to support AI solution co-creation.
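To make the FL paradigm described above concrete, here is a minimal federated-averaging sketch (an illustration of generic FL under simplifying assumptions, not the TrustFUL framework): each data owner trains on its own private data, and only model parameters are sent to a coordinator for averaging.

```python
# Minimal federated-averaging sketch (illustrative only, not TrustFUL itself).
# Each data owner fits a simple linear model on its private data; only the model
# parameters (never the raw data) cross silo boundaries.
import numpy as np

rng = np.random.default_rng(0)

def local_update(X, y, w, lr=0.1, epochs=20):
    """Gradient-descent update of linear-regression weights on one silo's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three simulated data silos drawn from the same underlying relationship y = 3x + 1.
silos = []
for _ in range(3):
    X = np.column_stack([rng.uniform(-1, 1, 50), np.ones(50)])
    y = X @ np.array([3.0, 1.0]) + rng.normal(0, 0.1, 50)
    silos.append((X, y))

# Federated averaging: broadcast global weights, train locally, average the results.
w_global = np.zeros(2)
for _ in range(10):
    local_weights = [local_update(X, y, w_global.copy()) for X, y in silos]
    w_global = np.mean(local_weights, axis=0)  # only parameters leave each silo

print("global model weights after federated training:", w_global)
```

The TrustFUL contribution sits on top of this basic loop: deciding, via transparency, fairness and robustness measures, how self-interested data owners organize themselves and how their contributions are weighed and audited during training.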
In safety-critical applications (e.g., where human lives are at stake), it is crucial for humans to be well trained to handle expected and unexpected scenarios of varying complexity. Paramedics frequently respond to life-or-death situations, and they must be trained to handle expected and unexpected situations with respect to patient condition and response to treatment effectively. Manned vessels in maritime traffic operate in environments with many other vessels, and humans must be trained to safely navigate varied situations. A wide range of critical activities in crime response, healthcare, defence and construction also require training to improve safety. Through this project, the team intends to develop and assess Explainable and tRustworthy (ExpeRt) AI (or Agent) Training Programs (ATPs) with feedback interfaces that adaptively train humans for safety-critical applications, with showcase projects on emergency response and maritime navigation. The ExpeRt ATPs will generate safe, unexpected scenarios that adapt to observed learner deficiencies while still providing fair and comprehensive coverage of all cases and situations. This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2020-017). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.