With the advent of e-commerce, users are presented with numerous alternatives to satisfy their everyday needs. Choosing among the available options generally entails weighing multiple, often conflicting aspects, whose trade-offs different users assess differently.
This project proposes PERFLEXO, a new methodology for multi-objective querying built around three hard requirements: personalization, flexibility in the preference input, and output-size control. Past approaches have considered these requirements individually, but no existing work satisfies all three. On the technical side, the main contributions of the project will centre on PERFLEXO's ability to process large option sets (i.e., scalability) and produce shortlists in reasonable time (i.e., responsiveness).
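As a toy illustration of the three requirements (the scoring scheme, names, and data below are hypothetical, not PERFLEXO's actual model), a personalized shortlist can be sketched as weighted scoring over per-option objective values with an explicit size cap:

```python
import heapq

def shortlist(options, weights, k):
    """Return a k-item shortlist of options scored by personalized weights.

    options: list of dicts mapping objective name -> value (higher is better)
    weights: dict mapping objective name -> user-specific importance
    k:       output-size control (hard cap on the shortlist length)
    """
    def score(opt):
        # Personalization: each user supplies their own trade-off weights.
        return sum(weights.get(obj, 0.0) * val for obj, val in opt.items())

    # Output-size control: at most k results, ranked by personalized score.
    return heapq.nlargest(k, options, key=score)

# Flexibility in the preference input: weights may be partial; unspecified
# objectives simply contribute nothing to the score in this toy model.
hotels = [
    {"rating": 4.5, "cheapness": 0.2, "proximity": 0.9},
    {"rating": 3.8, "cheapness": 0.8, "proximity": 0.4},
    {"rating": 4.9, "cheapness": 0.1, "proximity": 0.3},
]
print(shortlist(hotels, weights={"rating": 0.7, "cheapness": 0.3}, k=2))
```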
AI models trained offline rely on the availability of all classes in the training data. When they are updated online to learn from new incoming data, they often bias towards the patterns of the new classes and thus forget the old ones, a problem known as catastrophic forgetting. This project aims to tackle this issue through task-specific data augmentation. The augmentation for old classes is achieved by distilling from new or open-set data that contain knowledge of the old classes, e.g., shared contexts and sub-parts.
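A minimal sketch of the distillation idea, assuming a PyTorch-style setup in which `old_model` is the frozen model trained on the old classes and `new_model` is being updated online (the function names, temperature, and loss weighting are illustrative assumptions, not the project's actual method):

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    """KL divergence between the old model's softened predictions and the
    new model's predictions on the same inputs. New or open-set images that
    share contexts or sub-parts with the old classes carry old-class
    knowledge in the teacher's soft outputs, which this term preserves."""
    old_probs = F.softmax(old_logits / T, dim=1)
    new_log_probs = F.log_softmax(new_logits / T, dim=1)
    return F.kl_div(new_log_probs, old_probs, reduction="batchmean") * (T * T)

def training_step(new_model, old_model, x, y, alpha=0.5):
    """Combine the usual classification loss on the new classes with a
    distillation term that acts as task-specific augmentation for old ones."""
    new_logits = new_model(x)
    with torch.no_grad():
        old_logits = old_model(x)                      # frozen teacher
    ce = F.cross_entropy(new_logits, y)                # learn new classes
    kd = distillation_loss(new_logits[:, :old_logits.size(1)], old_logits)
    return alpha * ce + (1 - alpha) * kd               # retain old classes
```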
(This is a 6-month extension of the research collaboration with Fujitsu Ltd.) Under the Fujitsu-SMU Urban Computing and Engineering (UNiCEN) Corp Lab, SMU has undertaken the Digital Platform Experimentation (DigiPlex) project with Fujitsu. The project was carried out using the Digital Annealer (DA), a quantum-inspired technology developed by Fujitsu. The DigiPlex project identified both challenges in solving constrained optimization problems with such technology and promising methods for tuning the underlying model parameters to improve runtime performance. This project aims to develop hyperparameter tuning methodologies, machine learning techniques, operations research algorithms, and software tools to enhance quantum-inspired techniques for solving large-scale real-world combinatorial optimization problems.
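To make the tuning problem concrete (a hypothetical toy instance, not the DigiPlex formulation), constrained problems are typically encoded as a QUBO in which constraint violations become quadratic penalty terms; the penalty coefficient is exactly the kind of model parameter whose tuning affects both solution quality and run time:

```python
import itertools
import numpy as np

def build_qubo(values, k, penalty):
    """QUBO for 'pick exactly k of n items, maximizing total value':
    E(x) = -sum_i v_i x_i + penalty * (sum_i x_i - k)^2.
    Too small a penalty yields infeasible solutions; too large a penalty
    makes the energy landscape hard for the annealer to search."""
    n = len(values)
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -values[i] + penalty * (1 - 2 * k)   # linear terms on diagonal
        for j in range(i + 1, n):
            Q[i, j] = 2 * penalty                      # pairwise penalty terms
    return Q  # constant penalty*k^2 omitted; it does not change the argmin

def brute_force_solve(Q):
    """Stand-in for the Digital Annealer on a toy instance."""
    n = Q.shape[0]
    return min(itertools.product([0, 1], repeat=n),
               key=lambda x: np.array(x) @ Q @ np.array(x))

Q = build_qubo(values=[5, 7, 3, 9], k=2, penalty=20.0)
print(brute_force_solve(Q))  # (0, 1, 0, 1): the two highest-value items
```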
The world is experiencing a rapid transition towards a digital society. Although huge numbers of Internet of Things (IoT) devices are being deployed to provide accurate, real-time sensing and observation of the environment, security and privacy concerns have become a major barrier to the large-scale adoption and deployment of IoT. To that end, this project aims to provide IoT devices with privacy-aware authentication and flexible authorisation capabilities to build trust in IoT.
With the widespread adoption of drones in civilian, business, and government applications, concerns about breaches of safety, security, and privacy through the exploitation of drone systems are also rising to the highest national level. Malicious entities have used drones to conduct physical and cyber-attacks such as unauthorized surveillance, drug smuggling, and armed use. In this project, the research team aims to develop methods and tools, collectively referred to as ADrone, for auditing drones to detect anomalies such as malware, data leaks, and software bugs that could be exploited to conduct criminal or malicious drone activities. The research team will analyse at least five different drone-related criminal or malicious activities from their collaborator and demonstrate how ADrone can assist drone forensic analysts in detecting the root causes of those activities.
In this Human-Computer Interaction research, the research team will design a novel system that addresses the low wages of online crowd work, also known as the online gig economy. Drawing on knowledge from mechanism design in the economics literature, the research team will design and develop user interfaces, powered by machine learning models, that:
- Present information that encourages crowdsourcing requesters to pay a fairer wage to online workers; and
- Use nudging messages and information visualization to persuade workers to submit high-quality work.
This research collaboration with IBM aims to develop the optimisation capabilities needed to build a cutting-edge resilient supply chain, leveraging data science to preserve the continuity and consistency of product supply and to meet business obligations for product delivery and service to customers in the face of both short-term operational and longer-term strategic disruptions. In this project, the team seeks to leverage IBM's relevant internal, supplier-provided, public, and subscription data sources to improve operational decision-making capabilities, proactively anticipate and respond to disruptive events, and enable resiliency evaluations for products, product families, or tiered supply networks.
The "SmartBFA 2.0" project aims to build a "Google Maps" equivalent for wheelchair users, so that they can find barrier-free access paths when navigating around Singapore. This objective is in line with Singapore's vision towards building a smart and inclusive city for everyone.
A major innovation of the research team's project is the incorporation of crowdsourced sensor inputs; in particular, they aim to solicit multi-modal data collected from a smartphone app to supplement the accessibility information that they have collected using specially-designed sensors. They also seek to collect user feedback, so as to make their system more useful to wheelchair users.
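A minimal sketch of how such crowdsourced reports might feed into route computation (the graph format, penalty scheme, and severity scores below are assumptions for illustration, not the SmartBFA implementation):

```python
import heapq
from collections import defaultdict

def barrier_free_route(edges, reports, start, goal):
    """Shortest path where each segment's length is inflated by a penalty
    derived from crowdsourced accessibility reports.

    edges:   list of (u, v, length_m) undirected footpath segments
    reports: dict mapping (u, v) -> list of severity scores in [0, 1]
             (e.g., steps or steep ramps flagged via the smartphone app)
    """
    graph = defaultdict(list)
    for u, v, length in edges:
        sev = reports.get((u, v), []) + reports.get((v, u), [])
        penalty = 1.0 + 10.0 * (sum(sev) / len(sev) if sev else 0.0)
        graph[u].append((v, length * penalty))
        graph[v].append((u, length * penalty))

    # plain Dijkstra over the penalized graph
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))

    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])      # raises KeyError if goal is unreachable
    return path[::-1]
```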
"Learning by doing” (LBD) is the phenomenon where a worker’s productivity rises with cumulative production experience. As LBD requires no additional investment in hiring or equipment investment, it is viewed by many as an important channel for firms to achieve productivity growth. Unfortunately, although conceptually simple and intuitive, the sources and enablers of LBD remain a mystery; as a result, even when a firm intends to facilitate LBD among its employees, it is not clear how to effectively achieve it. This challenge originates from the difficulty in quantifying and isolating the effects of LBD, and even in a few instances where the measurement of LBD effects (in terms of productivity) is made possible by natural events, these measurements are typically only at the aggregate level. In this project, the team aims to build a novel Big Data framework to measure the LBD effects for workers in the transport gig economy in Singapore. Their ambition is to measure LBD effects at not just the productivity level, which is easily tainted by other factors, but also at the skill level. They plan to achieve this by mining drivers’ microscopic movement traces and trip fulfilment (including both taxi and ride-hailing drivers), and quantify drivers’ skills in anticipating demands and competition from other drivers. Their research will provide a rare view into how big data can revamp the understanding of labor productivity and LBD effects at the individual level, and it will help policy makers and platform operators to come up with policies that are more effective in helping workers cope with competitions and sudden changes such as disruptions brought about by the COVID-19 pandemic.
In this multi-pronged initiative, we propose to build a framework for developing certifiable AI systems systematically, i.e., with the help of theories, tools, certification standards, and processes. This is motivated by the many recently discovered problems with existing AI techniques and systems, e.g., adversarial samples and privacy and fairness issues, as well as the many ad hoc attempts at fixing them. For AI techniques to truly become part of a wide digital transformation across many industries, it is vital that we have foundational mechanisms to quantify the problems in AI models and rectify them.
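As one concrete example of quantifying a problem in an AI model (a generic illustration of measuring robustness to adversarial samples, not the certification framework proposed here), the sketch below applies a fast-gradient-sign perturbation to a toy logistic-regression model and reports how much accuracy survives:

```python
import numpy as np

def fgsm_robust_accuracy(w, b, X, y, eps):
    """Accuracy of a logistic-regression model under an L-infinity
    fast-gradient-sign attack of radius eps (eps=0 gives clean accuracy).

    w, b: model weights and bias; X: inputs (n, d); y: labels in {0, 1}.
    """
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_x = (p - y)[:, None] * w[None, :]     # d(cross-entropy)/d(input)
    X_adv = X + eps * np.sign(grad_x)          # worst-case bounded perturbation
    p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
    return float(np.mean((p_adv > 0.5) == (y == 1)))

# A model can score perfectly on clean data yet degrade sharply under this
# metric -- precisely the gap a certification process needs to expose.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ w > 0).astype(float)
print(fgsm_robust_accuracy(w, 0.0, X, y, eps=0.0))   # clean accuracy
print(fgsm_robust_accuracy(w, 0.0, X, y, eps=0.3))   # adversarial accuracy
```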