Recent advancements in AI make it possible to process large amounts of medical imaging data and replicate clinicians’ decisions with competitive performance. However, the adoption of AI in clinics has been challenging due to several issues, such as clinicians’ difficulty in understanding how AI operates, which limits their trust in it and its adoption in practice. In this project, we aim to develop and evaluate a human-AI collaborative system and practices for improving collaboration between clinicians and AI in the context of head and neck cancer screening. The system learns representations of clinical videos to identify urgent referral cases and presents AI explanations through interactive visualizations to improve clinicians’ understanding of the AI and support their clinical practice. After implementing the proposed system, we will conduct user studies to evaluate its effectiveness.
The main goal of this project is to develop new technologies for testing how well the perception module of an autonomous driving system functions and for understanding how perception errors impact other parts of the system, such as decision-making. The project team aims to create innovative solutions for evaluating the performance of the perception module in autonomous driving. Throughout the project, the team will draw on software testing, machine learning, formal methods, and evolutionary algorithms to develop and explain its methods. The resulting technologies will contribute to improving the safety and security of autonomous vehicles from their development phase to actual use on the road.
The objective of the proposed project is to explore, in close collaboration with a local air transport hub, the development, validation and testing of an integrated set of models, algorithms, and tools that will support the Stand Assignment Process, considering impacts on the activities and behavior of passengers within the terminals. The project will also assess the likely impacts of a new AI-based system on the range of affected stakeholders, involve managers and staff in the design process, and train them in the use and management of this technology. Similar use cases with a ride-hailing service provider will also be explored.
This research project aims to leverage Virtual Reality (VR) and Artificial Intelligence (AI) to improve public speaking skills through immersive, real-world scenario simulations. The project seeks to develop PresentationPro, a VR system with AI-driven avatars that respond dynamically to a presenter’s body language and speech, enhancing the learning experience by providing interactive and personalized feedback. It addresses the scalability and resource limitations of traditional public speaking training by offering a virtual environment where students can practice and refine their skills without the need for a physical audience. The research will explore PresentationPro's effectiveness in helping students achieve learning outcomes in university public speaking programs and equip them with key skills for the future workplace. By incorporating advanced AI, machine learning, and VR technologies, PresentationPro aims to provide a realistic and accessible virtual practice experience that reduces public speaking anxiety and improves performance. The project will be assessed through pilot studies focusing on learning outcomes, system usability, and the immediate applicability of training in real-world settings.
The objective of this project is to enhance students’ comprehension, retention, and overall learning outcomes in programming by leveraging an AI-enabled system, PromptTutor. The project aims to design an AI-enabled intervention that prompts students to reflect on their completed tasks, addresses doubts raised in their reflections, and provides additional learning resources in a personalised and timely manner.
In this digital age, advancements in artificial intelligence (AI) have brought about both great opportunities and significant challenges. One of these challenges revolves around the protection of personal data, particularly digital images, which can be exploited by AI technologies. This proposal addresses these issues by developing solutions that safeguard the digital rights of individuals and protect their creations from potential misuse by AI technologies. The proposed approach offers a 'cloak of invisibility' for digital images, rendering them unexploitable by AI while retaining their visual appeal for human observers. The project aims to return control to individuals, ensuring the protection of their art and their privacy in the digital world.
This research/project is supported by the National Research Foundation, Singapore under the AI Singapore Programme (AISG Award No: AISG3-GV-2023-011).
ZEASN Technology has been a global leader in smart TV solutions since 2011 and is headquartered in Singapore with a strong global presence. ZEASN's flagship product, Whale OS, powers 90 million devices globally for over 300 brands. The collaborative research between SMU and ZEASN Technology Pte Ltd is dedicated to developing an advanced Web 3.0 creative media content ecosystem. Emphasizing critical aspects such as tokenomics, incentive design, and privacy-enhancing computation, the project’s primary goal is to construct a future-proof digital framework that is user-friendly and secure, and that maximizes user participation, privacy, and profit. Anticipated outcomes include a robust, efficient, and scalable Web 3.0 creative media content ecosystem that maintains user privacy while fostering a dynamic, tokenomics-driven creative space. This comprehensive approach seeks to revolutionize how creative media is created, shared, and monetized, empowering users and content creators in the digital era. Leveraging combined expertise from economics, computer science, and digital media, the team aims to design an ecosystem aligned with the values of the Web 3.0 vision: decentralized, user-centric, and privacy-preserving. An early outcome of this collaboration addresses key challenges in the century-old film industry, with plans for a Web3-powered virtual cinema on ZEASN's worldwide Whale OS CTVs, aiming to decentralize film distribution and monetization in a transparent and rewarding fashion.
The global fintech landscape is undergoing a pivotal shift, driven in part by advanced AI techniques. This project aims to: (i) understand the inner workings of diverse investment systems to assess their transaction patterns; (ii) create algorithms that decode fintech data, offering insights and aiding market behavior prediction; and (iii) leverage optimization and AI methods to enhance trading and transaction systems.
This project, led by A/Prof Iris Rawtaer (SKH), aims to utilise multimodal sensor networks for early detection of cognitive decline. Under this project, the SKH and NUS team will oversee project operations, screening, recruitment, psychometric evaluation, data analysis, data interpretation, reporting, and the answering of clinical research hypotheses. The SMU team will collaborate with SKH and NUS to provide technical expertise for this study by ensuring the safe implementation and maintenance of the sensors in participants' homes, providing the sensor-obtained data to the clinical team, and applying artificial intelligence methods for predictive modelling.
This project is set to advance the security landscape of emerging Web3 technologies, including pattern- and model-based fraud detection and knowledge graph-based reasoning, in order to address the various irregularities in the Web3 domain and establish a comprehensive set of compliance standards.