By Alistair Jones
SMU Office of Research Governance & Administration – Artificial Intelligence (AI) is touted as the most transformative technology of the 21st century. Investment in the sector has reached staggering levels, and the race is on as the big digital players compete to deliver the next advance.
Present-day generative AI is based on large language models (LLMs), which are trained on vast amounts of data to recognise patterns and make predictions. ChatGPT is a popular example: a friendly chatbot that can explore ideas and solve problems, but with no intentions of its own. Now LLMs are moving out of chat boxes and into operational control rooms.
"AI systems are increasingly making real decisions such as planning routes, scheduling resources or controlling workflows," says Zhiguang Cao, an Assistant Professor of Computer Science at Singapore Management University (SMU).
"But they optimise for efficiency or performance without understanding social responsibility, risk or trust. Current safety checks often happen only after decisions are made, which might be too late."
Professor Cao is the Principal Investigator of a three-year research project, funded under the AISG Research and Governance Joint Grant Call, to develop VISTA (a Value-Informed Safety and Trust Architecture for Autonomous LLM agents), which will embed psychologically grounded values directly into every step of LLM decision-making and operation.
"VISTA is needed to ensure AI systems can monitor and regulate their behaviour while they are making decisions, not after deployment," Professor Cao says.
"It will introduce continuous, real-time monitoring and correction of an AI agent’s behaviour as it reasons and plans. This shifts AI trustworthiness or safety from a reactive model to a proactive one."
Inside the loop
VISTA will be one of the first systems to embed social values directly into the AI decision process, rather than treating ethics as an external filter. Does that mean we are at the forefront of developing a moral compass for AI?
"VISTA does not impose morality in a philosophical sense, but it will provide a measurable and transparent value signal that guides AI behaviour," Professor Cao says. "In that sense, it will function like a practical 'moral compass' that keeps AI decisions socially aware and accountable during operation."
So, what are the five psychometric value factors that VISTA will embed into an LLM-based agent?
"The five factors are social responsibility, risk-taking, rule-following, self-confidence and rationality. They were chosen because large-scale psychometric studies show these dimensions consistently explain how both humans and AI models behave in complex decision tasks. Together, they capture safety, compliance, and reasoning quality in a balanced way," Professor Cao says.
"Five is a practical and evidence-based starting point, not a hard limit. It provides enough expressiveness to capture meaningful value trade-offs without making the system slow or unstable. The architecture itself can support more dimensions if needed in the future."
And can the values be easily adjusted, or even substituted with other values?
"Yes. VISTA is modular by design. The value definitions, thresholds and even the value factors themselves can be adjusted to suit different domains and regulations, as long as they are well-defined and measurable," Professor Cao says.
"VISTA is designed to plug into existing LLM-based agents, not replace them. Unlike typical add-ons that check outputs after the fact, VISTA will sit inside the reasoning loop, observing partial decisions and intervening early when risks appear. That will make it an architectural upgrade rather than a superficial wrapper."
Real-time monitoring
Given the system is so adjustable, will there be safeguards to prevent VISTA being repurposed with covert value manipulation by malicious agents?
"VISTA will include tamper-resistant logging, traceable interventions and human-override mechanisms," Professor Cao says. "Any value adjustment or corrective action will be recorded and auditable, making covert manipulation difficult to hide. Governance oversight will be built into the system design, not added later."
VISTA also includes something called VISTA-Audit.
"VISTA-Audit is a real-time monitoring service that continuously checks whether an AI agent’s decisions stay within acceptable value boundaries. It will provide early warnings, detailed logs and trigger corrective actions when risks emerge. You may think of it as a live safety dashboard for autonomous AI."
It all sounds quite straightforward but, in fact, embedding social values into LLM frameworks is anything but simple.
"Social values are multi-dimensional, context-dependent and sometimes conflicting," Professor Cao says. "Traditional AI training pipelines are optimised for performance metrics, not nuanced trade-offs like safety versus efficiency. Embedding values requires new representations, new objectives and real-time control mechanisms.
"Most existing approaches compress complex social values into a single reward score or rely on static rules. They are often expensive to retrain, slow to react and blind to value drift during long decision processes."
Running value-aligned control efficiently in real time, with negligible latency, could deliver considerable real-world impact.
"It’s a breakthrough because decisions can be corrected before they cause harm, not after. VISTA will achieve this by using lightweight value encoders and fast auditing components that operate at near token-generation speed," Professor Cao says.
Towards trustworthy AI systems
Decision-making by LLMs can be influenced by inherent biases, which can even include a tendency to avoid action. Can VISTA address this?
"VISTA will explicitly measure behavioural tendencies like risk avoidance or overconfidence instead of letting them remain hidden. By making these tendencies visible and controllable, the system can rebalance behaviour dynamically rather than amplifying bias unintentionally," Professor Cao says.
Professor Cao brings to the project earlier work spanning academic research and real-world optimisation systems, with a particular focus on solution quality in logistics and decision-making AI.
"VISTA will be built on that foundation by extending high-performance AI systems towards trustworthy and socially responsible deployment," he says.