Most conversational systems today are poor at adapting to new or unexpected situations when serving end users in dynamic environments. Models trained on fixed datasets often fail in practical application scenarios. Existing methods for the fundamental task of conversation understanding rely heavily on training slot-filling models with a predefined ontology. For example, given an utterance such as “book a table for two persons in Blu Kouzina,” such models classify it into one of the predetermined intents, here book-table, and extract values such as “two persons” and “Blu Kouzina” to fill the predefined slots number_of_people and restaurant_name, respectively. The agent’s conversation ontology comprises these intents, slots, and corresponding values. When end users say something outside the predefined ontology, the agent tends to misunderstand the utterance, which can cause critical errors. The aim of this project is to investigate how conversational agents can proactively detect new intents, slots, and values, and expand their conversation ontology on the fly to better handle unseen situations during deployment.
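To make the predefined-ontology setup concrete, the minimal Python sketch below shows one way an agent’s fixed inventory of intents, slots, and values might be represented, and how the example utterance maps onto it. The data structures are illustrative assumptions, not taken from any particular dialogue system described in the project.

```python
# Minimal sketch (Python 3.9+) of a predefined conversation ontology and a
# parsed utterance. The representation is illustrative, not the project's.
from dataclasses import dataclass, field

@dataclass
class Ontology:
    intents: set[str]
    slots: dict[str, set[str]]          # slot name -> known values

@dataclass
class ParsedUtterance:
    intent: str
    slot_values: dict[str, str] = field(default_factory=dict)

ontology = Ontology(
    intents={"book-table", "cancel-booking"},
    slots={
        "number_of_people": {"one", "two", "three", "four"},
        "restaurant_name": {"Blu Kouzina"},
    },
)

# A slot-filling model maps the raw utterance onto this fixed inventory.
utterance = "book a table for two persons in Blu Kouzina"
parsed = ParsedUtterance(
    intent="book-table",
    slot_values={"number_of_people": "two persons",
                 "restaurant_name": "Blu Kouzina"},
)

# Anything outside the ontology (a new intent, slot, or value) cannot be
# represented, which is why out-of-ontology utterances get misunderstood.
assert parsed.intent in ontology.intents
assert set(parsed.slot_values) <= set(ontology.slots)
```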
This project will create new knowledge derived from historical sources to help Singapore’s academic and scientific communities understand long-term regional rainfall variability. Revealing long-term trends and extremes is critical to water security and climate-change preparedness, now and in the future, and will help scholars and government manage water-related risk. Principal Investigator: Holly Yang
The objective of this proposed research is to examine the relationship between managers’ incentives to meet or beat earnings expectations and employee mental well-being. Using data collected from a mental health mobile app, the team will explore whether and how pressure to meet firms’ financial reporting objectives affects the mental health of lower-level employees and their tendency to engage in misreporting.
In this proposed study, the team aims to examine two research questions related to Environmental, Social and Governance (ESG) reporting divergence. First, the team will investigate the negative consequences of ESG reporting divergence in the absence of mandatory ESG reporting requirements. Second, it will examine the benefits of mandatory ESG reporting requirements for capital markets. In answering these questions, the team aims to provide policy-relevant evidence on whether standardised ESG reporting improves the comparability of ESG reporting across firms globally and enhances the usefulness of ESG information for capital market participants.
This project aims to understand how Singaporeans respond to the current state of socioeconomic diversity (SED) and whether it shapes class relations. This will provide important insights into how future changes in SED may affect Singapore’s social compact. Critically, understanding how SED affects class relations will inform the targets of social intervention for mitigating Singapore’s emerging class divide.
State-of-the-art visual perception models in autonomous vehicles (AVs) fail in the physical world when they encounter adversarially designed physical objects or environmental conditions. The main reason is that they are trained on discretely sampled data, which can hardly cover all possibilities in the real world. Although effective, existing physical attacks consider only one or two physical factors and cannot jointly simulate dynamic entities (e.g., moving cars or persons, street structures) and environmental factors (e.g., weather and lighting variation). Meanwhile, most defence methods, such as denoising or adversarial training (AT), rely mainly on single-view or single-modal information, neglecting the multi-view cameras and different modality sensors on the AV, which contain rich complementary information. These challenges in both attacks and defences stem from the lack of a continuous and unified scene representation for AV scenarios. Motivated by these limitations, this project first aims to develop a unified AV scene representation based on neural implicit representations to generate realistic new scenes. With this representation, the team will develop extensive physical attacks, multi-view and multi-modal defences, and a more complete evaluation framework. Specifically, the project will build a unified physical attack framework against AV perception models that adversarially optimises the physical parameters and generates more threatening examples that could occur in the real world. Furthermore, the project will build multi-view and multi-modal defensive methods, including a data reconstruction framework that recovers clean inputs and a novel ‘adversarial training’ method, adversarial repairing, which enhances the robustness of deep models under the guidance of collected adversarial scenes. Finally, a robustness-oriented explanation method will be developed to understand the behaviour of visual perception models under physical adversarial attacks and robustness enhancement.
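As a much-simplified illustration of the adversarial-optimisation principle behind such attacks, the sketch below applies a standard digital FGSM perturbation to an image classifier using PyTorch. This is not the project’s physical attack framework, which optimises physical scene parameters through a neural implicit representation; the model and data here are stand-ins.

```python
# Toy digital FGSM attack on an image classifier, shown only to illustrate
# the general idea of optimising inputs to increase a perception model's loss.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor,
                label: torch.Tensor, epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (pixel values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Stand-in classifier and data; a real evaluation would use an AV
    # perception model and sensor frames.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    y = torch.tensor([3])
    x_adv = fgsm_attack(model, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())
```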
This project aims to study the implications of Dhaka’s new mass rapid transit system, the Dhaka Metro Rail (DMR), for the distribution of socioeconomic activity and mobility within the city. It will examine the impacts on economic activity, pollution and welfare within the city, and its findings may inform policymakers in Bangladesh and beyond on the impacts of public transportation and its importance for sustainable development.
(This is additional funding to SMU for the existing research project.) This project aims to study how better care options can be provided and developed for the local community. The first study will centre on the awareness and preferences of Singapore residents aged 50 to 76 regarding Assisted Living, with a set of survey questions to be designed and fielded through the Singapore Life Panel. Other focus areas will be developed over the course of this two-year collaboration as new topics emerge.
The TAICeN project focuses on developing AI-based solutions for cybersecurity, with two primary research areas: AI for Cybersecurity and Trustworthy AI. Part 1, AI for Cybersecurity, investigates advanced AI technologies for defending against cyber threats, covering tasks such as malware detection, intrusion detection on government cloud systems, crypto attribution, and insider attack attribution. Part 2, Trustworthy AI, ensures the security, robustness, and explainability of the AI-based solutions developed in Part 1. Part 1 will be conducted primarily by NTU, NUS, and BGU, in collaboration with local government agencies. Part 2, led by SMU, explores three key research topics: AI Security, AI Robustness, and AI Explainability. AI Security focuses on techniques to mitigate inference attacks, model extraction attacks, adversarial attacks, and poisoning attacks on these solutions. AI Robustness aims to provide quality assurance for AI systems by offering methods to evaluate, debug, and improve them. AI Explainability enables human comprehension of, and reasoning about, the AI decision-making process, which is crucial when predictions have national or safety-critical implications. By addressing both research challenges, the TAICeN project aims to develop effective and trustworthy AI-based solutions for cybersecurity that can keep pace with cybercriminals, automate threat detection, and defend against attacks. Specifically, Prof. Xie Xiaofei and Prof. Sun Jun will focus on the AI Robustness and AI Explainability work packages with NTU and the relevant partner institutions, and will demonstrate their research results with the relevant translation partners.
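As a generic illustration of the kind of quality assurance the AI Robustness topic covers, the sketch below measures how a classifier’s accuracy degrades as random input noise grows. It is an assumed, simplified stand-in for a robustness evaluation, not TAICeN’s actual methodology.

```python
# Generic robustness check: compare a classifier's accuracy on clean inputs
# against its accuracy under random noise of growing strength.
import torch
import torch.nn as nn

@torch.no_grad()
def accuracy(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    return (model(x).argmax(dim=1) == y).float().mean().item()

def robustness_curve(model, x, y, noise_levels=(0.0, 0.05, 0.1, 0.2)):
    """Accuracy at each noise level; a steep drop flags a brittle model."""
    return {eps: accuracy(model, (x + eps * torch.randn_like(x)).clamp(0, 1), y)
            for eps in noise_levels}

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model
    x, y = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))    # stand-in data
    print(robustness_curve(model, x, y))
```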
The project aims to collect data to assess the efficacy of cool paints in mitigating the Urban Heat Island (UHI) effect in schools through the deployment of micro-scale sensors.