By Alvin Lee
SMU Office of Research – Chatbots have become a common feature of modern life, deployed by organisations in fields ranging from e-commerce to healthcare. Improvements in technology, together with refinements based on user feedback, have made chatbots a cost-effective customer service tool.
Chatbots, however, rely on a built-in set of words and terms, called an ontology, to understand what a user has typed. For example, the sentence ‘I would like to reserve a table for two at ABC restaurant’ will be easily understood by a restaurant reservation platform chatbot, which might reply, ‘Date of reservation?’ or show a calendar on the user interface. The same chatbot might not understand ‘Do you have a restaurant serving shakshuka?’ but would likely understand ‘I am looking for a Middle Eastern restaurant.’
To resolve this, the backend is updated periodically, a task currently carried out manually. Trained personnel examine ‘shakshuka’ and other rarely seen words that users have typed and add them to the chatbot’s ontology, paying particular attention to the chatbot’s ability to correctly break down an utterance and identify its intent, slots, and values.
“Am I looking to book a table at a restaurant? Or am I making a complaint? Or am I asking for a delivery? The chatbot needs to understand my intent,” explains Liao Lizi, SMU Assistant Professor of Computer Science. “In order to be of service, the chatbot will then analyse the requirements. For example, if it is a reservation for: two people; for a certain kind of cuisine; in a specific area, etc. These are the slots.
“To interact with the database in the backend, a database query will be needed. Certain conditions must be filled in, in this case the cuisine could be Chinese or Italian or Thai, and the area could be the financial district or the name of a specific area or street. These are the values.”
She adds: “If we have a perfect chatbot, when the programme sees users make certain requests that cannot be fulfilled, it will automatically update its internal system and handle the situation.”
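To make the intent-slot-value breakdown concrete, here is a minimal sketch in Python, not Professor Liao’s actual system, of how a parsed booking utterance might be represented and turned into a backend query. The intent name, slot names and table layout are illustrative assumptions.

```python
# Minimal sketch (illustrative only): an utterance parsed into an intent
# plus slot-value pairs, then turned into a parameterised database query.
from dataclasses import dataclass, field


@dataclass
class ParsedUtterance:
    intent: str                                            # e.g. "book_table"
    slots: dict[str, str] = field(default_factory=dict)    # slot name -> value


# "Reserve a table for two at an Italian place in the financial district"
parsed = ParsedUtterance(
    intent="book_table",
    slots={
        "party_size": "2",
        "cuisine": "Italian",
        "area": "financial district",
    },
)


def to_sql(p: ParsedUtterance) -> tuple[str, list[str]]:
    """Map slot-value pairs to a query against a hypothetical 'restaurants' table."""
    conditions = " AND ".join(f"{slot} = ?" for slot in p.slots)
    return f"SELECT * FROM restaurants WHERE {conditions}", list(p.slots.values())


print(to_sql(parsed))
```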
Deep learning for intent, slot, and value
Professor Liao recently won an MOE Academic Research Fund Tier 2 grant to build a deep learning framework that can detect new intents, slots, and values with reduced human input, and update the ontology accordingly. In practice, intents are rarely updated, slots occasionally, and values most frequently.
There are, after all, only so many things a user could ask for when interacting with a chatbot, and over time the chatbot would be refined enough to handle almost all possible intents. In the restaurant platform example, a COVID-19 vaccination requirement would prompt a rare update to intents, e.g., ‘Am I allowed to pick up an order if I am not vaccinated?’ The word ‘vaccine’ and related terms would be picked up by the chatbot as an intent to check vaccination status, while a slot might capture the make of vaccine, with values such as ‘Pfizer’, ‘Moderna’ or any other vaccine manufacturer.
Beyond this extraordinary situation, the usual updating of values within the context of a restaurant booking platform would pertain to the “closing of existing restaurants or the opening of new ones [where] a new restaurant would be a new ‘value’ in the system, and that would be added to the ontology,” explains Professor Liao.
Professor Liao tells the Office of Research that machine learning algorithms can distinguish between intents and slots fairly competently, but can sometimes confuse slots and values. The distinction will be made by clustering, that is, grouping data points with similar meanings, in this case names of restaurants or types of cuisine.
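As a rough illustration of that clustering step, candidate values can be embedded with a pre-trained sentence encoder and grouped by semantic similarity. The encoder name, example values and distance threshold below are illustrative choices, not details of the project.

```python
# Illustrative sketch: embed candidate values and cluster those with
# similar meanings, without fixing the number of clusters in advance.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

candidate_values = [
    "shakshuka", "hummus", "falafel",      # plausibly one cuisine-related cluster
    "pad thai", "green curry",             # plausibly another
    "ABC Trattoria", "Luigi's Kitchen",    # plausibly restaurant names
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # example pre-trained encoder
embeddings = encoder.encode(candidate_values)

# Use a distance threshold rather than a preset cluster count,
# since new values can arrive at any time.
clustering = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.6, metric="cosine", linkage="average"
)
labels = clustering.fit_predict(embeddings)

for value, label in zip(candidate_values, labels):
    print(label, value)
```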
“First, we need a good model to understand the meaning of an utterance. If the user says something, we must understand it clearly,” says Professor Liao, referring to the use of software such as the semantic parser SEMAFOR, pre-trained language models, or even large language models. “For example, if you type ‘Can you book a cheap restaurant for me?’ it will pick up ‘cheap’ as a requirement pertaining to cost, and ‘restaurant’ will be captured as an entity that the user is requesting help with.
“Another thing is making use of existing labelled datasets. We usually don’t start from zero. These models have some utterances assigned to certain slots already. When we get new data, we need to consider existing utterances and find what the new utterances mean. It’s semi-supervised learning and some knowledge transfer.”
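One way to picture this reuse of labelled data, again as a sketch under assumed slot names and thresholds rather than the project’s own method, is to train a simple classifier on utterances already tied to known slots and flag new utterances that none of the known slots explains confidently as candidates for ontology expansion.

```python
# Illustrative sketch: use existing labelled utterances to decide whether a
# new utterance maps onto a known slot or should be flagged for review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_utterances = [
    ("a table for four please", "party_size"),
    ("somewhere that serves italian food", "cuisine"),
    ("something near the financial district", "area"),
]
texts, slots = zip(*labelled_utterances)

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(texts)
classifier = LogisticRegression(max_iter=1000).fit(X, slots)

new_utterances = ["a cheap thai place", "do they have a smart casual dress code"]
probabilities = classifier.predict_proba(vectoriser.transform(new_utterances))

for utterance, probs in zip(new_utterances, probabilities):
    if probs.max() < 0.5:   # no known slot explains it confidently
        print("candidate for ontology expansion:", utterance)
    else:
        print("maps to existing slot:", classifier.classes_[probs.argmax()], "-", utterance)
```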
She adds: “Suppose we had a slot ‘dress code’, which had values like: cocktail attire, business formal, smart casual, no dress code, etc. There are certain values that are closely related to these, such as fine dining, upscale casual, gastropubs or fast food, etc. These labelled data about ‘dress code’ would help to signal a possible new slot, ‘dining style’, for us.
“To make sense of all this, we first need to understand what the user is saying. Then we need to understand the relations between cocktail attire and fine dining, as well as the relations between fine dining and other values such as upscale casual, fast food, etc. We can then ground this information in the existing slots and common-sense knowledge, manage the clustering process, and update the system.”
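A toy version of that signal, with an assumed encoder and similarity threshold, might embed the existing ‘dress code’ values alongside unassigned values and surface those that relate strongly to the slot without being dress codes themselves, as evidence for a possible new ‘dining style’ slot.

```python
# Illustrative sketch: measure how strongly unassigned values relate to an
# existing slot's values, as a cue for proposing a related new slot.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # example pre-trained encoder

dress_code_values = ["cocktail attire", "business formal", "smart casual", "no dress code"]
unassigned_values = ["fine dining", "upscale casual", "gastropub", "fast food", "shakshuka"]

known = encoder.encode(dress_code_values)
candidates = encoder.encode(unassigned_values)

# Average similarity of each unassigned value to the 'dress code' values.
scores = cosine_similarity(candidates, known).mean(axis=1)

related = [v for v, s in zip(unassigned_values, scores) if s > 0.3]   # illustrative threshold
print("values related to 'dress code', hinting at a 'dining style' slot:", related)
```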
Going forward
Professor Liao explains that the technology is fairly well developed in the restaurant-booking space, but less so in areas such as online learning and health pre-consultation. While a fully automated system that can detect new intents, slots, and values on the fly may not be achieved in the near future, Professor Liao’s project aims to “catalyse a new thread of thought for building intelligent agents with self-expansion abilities.” Part of the project will explore human-in-the-loop learning techniques to create new slots, underlining the significant role humans still play in artificial intelligence.
“We still need humans,” Professor Liao observes. “In the past, ontology construction required humans to read lots of documents to determine what should be the slot or intent. In our project, the human role is changed. We have automatic algorithms to give suggestions, and the human becomes the judge or adjudicator.”
Going forward, would humans in such roles need to be technically trained, as they are now? Or could end users manage with basic training?
“This part is beyond this project,” Professor Liao says. “Humans have always expected chatbots to mimic human behaviour and adapt to human behaviour patterns. Ideally, human-machine interaction should be two-way: machines mimic human behaviour, and humans should understand machine behaviour as well.”