Instructors today face increasing challenges in designing and delivering courses that effectively balance cognitive load, align with intended learning outcomes, and actively engage diverse learners. Traditional lecture slides and assessments often lack structure, personalization, and interactivity, leading to passive learning, reduced motivation, and inconsistent achievement of educational goals. Furthermore, evaluating and improving teaching delivery remains largely subjective, with limited tools to analyze real-time classroom engagement or instructional clarity. This research addresses these challenges by exploring how AI and analytics can enhance course content design, assessment, and delivery in a data-informed and scalable manner. The project offers AI-driven analysis and nudging mechanisms that align content with learning objectives through semantic and topical modeling, while embedding cognitive design strategies to manage learners’ mental load effectively.
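As a rough illustration of the semantic alignment idea described above (a minimal sketch, not the project's actual pipeline), the snippet below scores how well each slide matches a stated learning outcome using off-the-shelf sentence embeddings; the model name, slide texts, and outcomes are illustrative assumptions.

```python
# Illustrative sketch only: aligning course slides with intended learning
# outcomes via semantic similarity. Model choice and example texts are
# assumptions for demonstration, not the project's actual method.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # compact general-purpose encoder

learning_outcomes = [
    "Explain the trade-offs between supervised and unsupervised learning",
    "Apply gradient descent to fit a linear regression model",
]
slide_texts = [
    "Slide 3: Clustering groups unlabelled data by similarity ...",
    "Slide 7: We minimise squared error by iteratively updating weights ...",
]

outcome_emb = model.encode(learning_outcomes, convert_to_tensor=True)
slide_emb = model.encode(slide_texts, convert_to_tensor=True)

# Cosine similarity between every slide and every outcome.
scores = util.cos_sim(slide_emb, outcome_emb)

for i, slide in enumerate(slide_texts):
    best = scores[i].argmax().item()
    print(f"{slide[:8]} -> best-aligned outcome #{best} "
          f"(similarity {scores[i][best].item():.2f})")
```

Slides whose best similarity falls below a chosen threshold could then be flagged for the nudging mechanism as poorly aligned with the intended outcomes.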
Eye tracking has emerged as a powerful, non-invasive window into neurological and ocular health, offering early biomarkers for conditions such as Parkinson’s disease, Alzheimer’s disease, and glaucoma. However, current RGB camera–based systems are bulky, power-intensive, and limited in their ability to capture the subtle, high-frequency micro-movements of the pupil that are critical for early diagnosis. To overcome these limitations, this project introduces SynapSee, a novel end-to-end wearable system that integrates event cameras with a multi-light active probing setup and computationally optimised algorithms for real-time, fine-grained pupil tracking. Unlike conventional eye trackers, event cameras operate at sub-microsecond latencies and asynchronously capture changes in light intensity, making them uniquely suited for high-velocity saccades and micro-movements. By exploiting “dark” and “bright” pupil effects through multi-light probing, SynapSee reduces extraneous event volume, enabling low-power and efficient processing. The system is further enhanced by hybrid spiking neural networks, adaptive sensing algorithms, and collaborative offloading to nearby devices, achieving both accuracy and energy efficiency. We will validate SynapSee in two exemplar clinical contexts: (i) detecting early neurodegenerative changes in Parkinson’s disease and (ii) identifying the onset of low-vision conditions such as macular degeneration, cataracts, and glaucoma. Longitudinal user and patient studies, conducted in collaboration with clinical partners, will establish discriminative ocular biomarkers and benchmark the system’s sensitivity and specificity. By enabling unobtrusive, continuous, and large-scale monitoring via smart glasses, SynapSee has the potential to transform preventive healthcare, offering clinicians powerful tools for early intervention and personalised disease management.
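To make the multi-light probing idea concrete, here is a conceptual sketch (not SynapSee's actual algorithm) of how an event stream might be thinned by keeping only events whose polarity matches the dark/bright pupil transition expected under the currently active light source; the event tuple format and LED schedule are assumptions.

```python
# Conceptual sketch: filter an event-camera stream so only events consistent
# with the expected dark/bright pupil transition are kept, reducing the
# extraneous event volume passed to downstream pupil tracking.
from dataclasses import dataclass

@dataclass
class Event:
    t_us: int      # timestamp in microseconds
    x: int
    y: int
    polarity: int  # +1 = brightness increase, -1 = brightness decrease

LED_PERIOD_US = 2_000  # assumed alternation period of on-axis / off-axis LEDs

def expected_polarity(t_us: int) -> int:
    """When the on-axis LED is active the pupil region brightens (+1);
    when the off-axis LED takes over it darkens (-1). Illustrative model."""
    phase = (t_us // LED_PERIOD_US) % 2
    return +1 if phase == 0 else -1

def filter_pupil_events(events: list[Event]) -> list[Event]:
    """Discard events whose polarity contradicts the active illumination."""
    return [e for e in events if e.polarity == expected_polarity(e.t_us)]

stream = [Event(100, 40, 60, +1), Event(2_300, 41, 61, +1), Event(2_400, 80, 10, -1)]
print(len(filter_pupil_events(stream)), "of", len(stream), "events kept")
```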
This project focuses on creating self-adaptive embodied agents capable of perceiving and planning in dynamic real-world environments, addressing current challenges such as hallucinated plans, poor object tracking, and inflexible execution. It employs retrieval-augmented planning, fine-grained environment understanding, and adaptive plan refinement using large multimodal models, validated in simulation and on real robots performing household tasks. Expected outcomes include new methods for adaptive planning and perception, a kitchen activity video dataset, and demonstrations in domestic scenarios, with broad applications in autonomous vehicles and assistive devices. The initiative aims to impact daily living and healthcare, especially eldercare in Singapore, aligning with national priorities to enhance AI leadership and support the Smart Nation agenda.
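The sketch below illustrates the general retrieval-augmented planning pattern named above, under stated assumptions: a tiny in-memory store of verified past plans is queried by embedding similarity, and the retrieved exemplars would then condition the multimodal planner. The encoder, tasks, and plans are invented for illustration.

```python
# Illustrative sketch of retrieval-augmented planning (not the project's code):
# retrieve exemplar plans for similar past tasks so that generated steps are
# grounded in observed successes rather than hallucinated.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# A tiny "plan memory": past task descriptions paired with verified step lists.
plan_memory = [
    ("make a cup of tea", ["boil water", "add tea bag to cup", "pour water", "steep 3 min"]),
    ("wipe the kitchen counter", ["get cloth", "dampen cloth", "wipe surface", "rinse cloth"]),
]

def retrieve_exemplars(task: str, k: int = 1):
    """Return the k stored plans most similar to the new task description."""
    task_emb = encoder.encode(task, convert_to_tensor=True)
    mem_emb = encoder.encode([t for t, _ in plan_memory], convert_to_tensor=True)
    scores = util.cos_sim(task_emb, mem_emb).squeeze(0)
    top = scores.topk(min(k, len(plan_memory))).indices.tolist()
    return [plan_memory[i] for i in top]

# The retrieved exemplars would be inserted into the planner's prompt, and the
# resulting plan refined as the perceived environment changes.
print(retrieve_exemplars("prepare a pot of green tea"))
```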
This proposal presents OMNICON, a comprehensive framework for generating realistic long-term multi-human motions with environmental context. By combining novel motion representations with generative solutions, OMNICON addresses critical challenges in long-term motion generation, multi-human interactions, and motion-with-context synthesis. Designed to advance applications across animation, gaming, virtual reality, and robotics, OMNICON leverages principles from physics and spatial reasoning to produce temporally consistent, contextually adaptive, and socially coherent motion sequences.
This project, conducted in collaboration with HTX, explores the use of Generative AI (GAI) to advance scientific computing and strengthen cloud security and cybersecurity resilience. It addresses deep research challenges in building intelligent, domain-specific automation by leveraging LLMs for computational chemistry and cloud configuration security, and by developing robust defence strategies that protect AI systems deployed in mission-critical settings.
The increasing realism and accessibility of AI-generated and AI-edited videos threaten public trust, information integrity, and digital security. From misinformation campaigns to identity fraud, such manipulated content can cause real-world harm. Current detection systems are limited: they often focus narrowly on facial deepfakes, lack cultural and linguistic diversity, offer little interpretability, and struggle to adapt to new manipulation techniques. Additionally, most systems emphasize passive detection, without offering mechanisms for content traceability or origin verification. This bilateral research project between Singapore Management University (SMU) and Sungkyunkwan University (SKKU) aims to address these challenges by developing an interpretable, adaptive, and globally deployable deepfake detection and protection system, tailored to the languages, dialects, and socio-cultural contexts of Singapore and South Korea.
This research/project is supported by the National Research Foundation Singapore under the AI Singapore Programme (AISG Award No: AISG4-TC-2025-018-SGKR).
Singapore and New Zealand both use interRAI, a standardised assessment tool that supports the care of older adults. While interRAI is reliable and effective, integrating Artificial Intelligence (AI) presents a transformative opportunity to enhance healthy ageing and support older people to live longer, more independent lives. Our project brings together clinicians and researchers from the University of Otago, Singapore Management University, University of Canterbury, and University of Auckland. We will identify how to effectively integrate AI into the interRAI assessment, risk prediction, and care planning process to improve efficiency, consistency, and personalisation of care. We will achieve this with a three-pronged approach:
1. AI-assisted Assessments: By partially automating the currently manual interRAI process, we can reduce assessment time by 50% while improving accuracy. We will integrate structured health data and multimedia inputs to generate enriched assessments.
2. AI-enhanced Risk Prediction: We will develop predictive models for outcomes such as fracture risk, cognitive decline, and depression. These models will be embedded into interRAI software to support timely, targeted interventions (an illustrative sketch of such a model follows this summary).
3. AI-driven Personalised Care Plans: We will create dynamic, user-friendly care plans using a knowledge-based AI system enhanced by large language models. These plans will be tailored for patients, families, and clinicians, ensuring clarity and actionable guidance.
With support from New Zealand’s Health NZ and ACC, and Singapore’s Agency for Integrated Care, Kwong Wai Shiu Hospital, NWC Longevity Practice, and 59 Socio-Techno Ventures, this initiative will augment existing systems to deliver scalable, cost-effective improvements to aged care while growing our respective AI sectors.
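The following is a minimal sketch of the kind of risk-prediction model described in point 2, assuming a handful of invented interRAI-style features and toy data purely for illustration; the project's actual models and feature set are not specified here.

```python
# Minimal sketch: a classifier mapping interRAI-style assessment items to a
# fracture-risk probability. Feature names and data are invented examples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical features: [age, prior_falls, gait_score, bone_density_score]
X = np.array([
    [82, 2, 3, -2.6],
    [74, 0, 1, -1.1],
    [88, 1, 4, -2.9],
    [69, 0, 0, -0.4],
])
y = np.array([1, 0, 1, 0])  # 1 = fracture within follow-up window

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

new_patient = np.array([[79, 1, 2, -2.0]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated fracture risk: {risk:.0%}")  # could feed a care-planning alert
```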
This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Technology Challenge – Leveraging AI for Healthy Ageing (AISG Award No: AISG4-TC-2025-015-SGNZ). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
The objective of the proposed project is to explore, in close collaboration with a local air transport hub, the development, validation and testing of an integrated set of models, algorithms, and tools that will support the Stand Assignment Process, considering impacts on the activities and behavior of passengers within the terminals. The project will also assess the likely impacts of a new AI-based system on the range of affected stakeholders, involve managers and staff in the design process, and train them in the use and management of this technology. Similar use cases with a ride-hailing service provider are being explored.
This project targets human capital development through AI-driven learning, with a focus on both childhood and adult learners. SMU researchers will develop AI-based tutoring technologies that enhance engagement and support during self-paced learning sessions. The project includes collaboration with organizations such as Yayasan Mendaki and SMU Academy. Key objectives are to capture multi-modal learner queries – visual, verbal, and gestural – using advanced sensors, and to build AI models for interactive question answering and generation in response to such queries. Focusing initially on mathematics problems, these models will also adapt the learning content (while formally assuring the correctness of auto-generated new content) based on assessments of learners’ current levels of competency and capability. The goal is to create new AI-powered online platforms to improve learning outcomes and personalize educational experiences across diverse learner populations.
This project focuses on enabling immersive AI-assisted human-robot collaboration in dynamic industrial environments such as aviation and marine maintenance. Assistive agents deployed in robots or other wearable devices must comprehend and respond to human-issued instructions involving spatial and temporal references, adapting their behaviour in real time. SMU researchers aim to develop lightweight, energy-efficient AI models and pervasive systems that support comprehension of such multi-modal instructions – using visual, verbal, and gestural cues – and relate them to the 3D environment captured using sensors such as RGB video, LIDAR, and neuromorphic cameras. Objectives include optimizing the execution of grounding tasks (associating instructions with specific real-world objects) for moving objects using video data and developing lightweight techniques for enhanced robotic spatial reasoning and planning (e.g., navigation to retrieve specific objects). These innovations will allow robotic agents to better interpret human commands and improve task execution, ultimately enhancing safety, productivity, and the adaptability of joint human-robot collaborative work in real-world settings.
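As a hedged illustration of the grounding task described above (not the project's actual system), the sketch below ranks candidate object crops, assumed to come from an upstream detector, against a verbal instruction using a standard CLIP vision-language model; the instruction and crop file names are hypothetical.

```python
# Illustrative grounding sketch: score candidate object crops against a
# verbal instruction and pick the best match for the robot to act on.
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

instruction = "pick up the red torque wrench next to the toolbox"
# Crops of candidate objects produced by an upstream detector/tracker
# (hypothetical file names for illustration).
crops = [Image.open(p) for p in ["crop_wrench.png", "crop_toolbox.png", "crop_helmet.png"]]

inputs = processor(text=[instruction], images=crops, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Similarity of each candidate crop to the instruction, normalised over crops.
scores = outputs.logits_per_image.softmax(dim=0).squeeze()
best = scores.argmax().item()
print(f"Grounded object: candidate #{best} (confidence {scores[best].item():.2f})")
```

For moving objects, the same scoring could be applied per video frame to the tracked candidates, which is where the project's lightweight, energy-efficient models become essential.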