The Spectre vulnerability has recently been reported to affect most modern processors. Attackers can extract private data using a timing attack. This is an example of a side-channel attack, in which secret information flows through side channels unintentionally. How to systematically mitigate such attacks is an important and challenging research problem. This project proposes to automatically synthesise mitigations of side-channel attacks using well-developed verification techniques. Given a system with design parameters that can be tuned to mitigate side-channel attacks, the approach will automatically generate provably secure valuations of those parameters.
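As a minimal illustration of the kind of timing side channel involved (this is not the project's synthesis method, and all names here are hypothetical): an early-exit string comparison does secret-dependent work, so its running time leaks how many leading characters of a guess match the secret.

```python
def leaky_equals(secret: str, guess: str) -> tuple:
    """Early-exit comparison; returns (equal, comparisons performed).
    The comparison count is a proxy for running time: it grows with
    the length of the matching prefix, leaking the secret bit by bit."""
    work = 0
    for s, g in zip(secret, guess):
        work += 1
        if s != g:
            return False, work
    return len(secret) == len(guess), work

def constant_time_equals(secret: str, guess: str) -> bool:
    """Mitigation sketch: always examine every character, so the
    amount of work no longer depends on where the first mismatch is."""
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= ord(s) ^ ord(g)
    return diff == 0
```

For example, `leaky_equals("s3cret", "s3xxxx")` performs 3 comparisons while `leaky_equals("s3cret", "zzzzzz")` performs only 1, so an attacker timing many guesses can recover the secret prefix by prefix; the constant-time variant closes that channel.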
The objective of the project is to build a predictive machine learning model, implemented on both a quantum computer and a simulated quantum computer, that has the potential to improve credit scoring accuracy. Credit scoring gives lenders and counterparties better transparency of the credit risk they take when dealing with a counterparty. Machine learning approaches make automated credit scoring feasible for broad coverage of small companies. Current approaches rely on classical machine learning algorithms applied to broad datasets that combine company, accounting, and socio-economic information. Improving the learning algorithms is thus an important step towards providing credit risk transparency.
This project will explore the use of quantum algorithms that cannot be implemented on classical machines today. This may open the route to practical quantum supremacy for a financial application and create business advantages for the financial industry as quantum computing continues to improve.
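To make the classical baseline concrete, here is a hedged sketch of the kind of model such approaches use: a logistic model mapping company features to a probability of default, then to a score band. The feature names, weights, and band thresholds are purely illustrative assumptions, not taken from the project.

```python
import math

# Illustrative weights for a toy logistic credit model (hypothetical).
WEIGHTS = {"debt_to_equity": 1.2, "years_trading": -0.15, "late_payments": 0.8}
BIAS = -1.0

def probability_of_default(features: dict) -> float:
    """Logistic regression: PD = sigmoid(bias + sum(w_i * x_i))."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def credit_band(pd_: float) -> str:
    """Map the probability of default to a coarse score band."""
    if pd_ < 0.05:
        return "A"
    if pd_ < 0.20:
        return "B"
    return "C"
```

Under this toy model, an established firm with low leverage and no late payments lands in band "B", while a young, highly leveraged firm with repeated late payments falls to "C". The project's premise is that quantum learning algorithms may improve on the accuracy of such classical models.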
The objective of this project is to provide new methods and systems for taxi queue management at Changi Airport, to better manage and predict taxi demand and supply. The team will develop models and systems that more accurately predict current and future taxi supply, passenger demand, and driver waiting times at all terminal taxi queues. By deriving segmented and personalised models of how drivers react to incentives and related information, the project will also study how best to balance supply and demand across all terminals in a workable and cost-effective way. The team will determine whether there is a shortage of taxi supply, either now or in the near term, and if so construct real-time and pre-emptive plans to attract the right number of drivers based on individual drivers' reaction and behaviour models.
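As a first-cut sketch of the quantities involved (not the project's actual models, which would be learned from data), a driver's waiting time and a near-term supply shortage can be estimated from directly observable values via Little's law:

```python
def estimated_wait_minutes(taxis_in_queue: int,
                           passengers_per_minute: float) -> float:
    """Little's-law estimate: minutes until a taxi joining the queue now
    reaches the head, assuming each departing passenger group takes one
    taxi and the boarding rate stays roughly constant."""
    if passengers_per_minute <= 0:
        return float("inf")
    return taxis_in_queue / passengers_per_minute

def supply_shortage(taxis_in_queue: int,
                    inbound_taxis: int,
                    expected_demand: int) -> bool:
    """Flag a shortage if expected passenger demand over the horizon
    exceeds the taxis already queued plus those inbound."""
    return expected_demand > taxis_in_queue + inbound_taxis
```

For example, 30 taxis in the queue with passenger groups boarding at 2.5 per minute gives an estimated 12-minute wait. The project's models would refine such estimates with per-terminal, time-varying data and driver behaviour.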
In addition, this project should provide useful and timely information to passengers (via different means such as digital displays, mobile app alerts, SMS alerts, etc.) when there are long queues at taxi stands that cannot be resolved within the next 10 to 15 minutes. The system should present alternative available transport options nearby (e.g. MRT, private-hire cars, limousines, shuttle buses) with related wayfinding, cost, and travel time information.
Mobile devices and mobile applications are increasingly important in people's daily lives, and their security and privacy are raising more and more concerns. "Do you allow the app to access your contacts, photos, media, files, messages, …" has become a common question users face when they first use an app, but such permission control mechanisms put too much burden on users. Most users do not understand well the purposes of the accesses or the implications of granting permissions, and simply grant the permissions most of the time, leading to significant misuse of their privacy-sensitive data by apps. This proposal aims to build capabilities that enable automated, finer-grained, and customisable permission controls, which will promote a privacy ecosystem that keeps users aware while reducing their burden, and push app developers to improve the privacy protection of their apps.
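One way to picture "finer-grained" control, sketched here under purely hypothetical names and policy entries (the proposal does not specify this design): access is decided per (resource, declared purpose) pair, with deny-by-default, rather than as a one-time blanket grant per resource.

```python
# Hypothetical purpose-aware policy table: an app's access to a resource
# is allowed only for an explicitly whitelisted purpose.
POLICY = {
    ("contacts", "caller-id"): "allow",
    ("contacts", "advertising"): "deny",
    ("location", "navigation"): "allow",
}

def check_access(resource: str, purpose: str) -> bool:
    """Deny by default; only explicitly allowed (resource, purpose)
    pairs pass, so the same resource can be open for one purpose and
    closed for another."""
    return POLICY.get((resource, purpose)) == "allow"
```

Here contacts are readable for caller ID but not for advertising, and any purpose not in the table is refused outright, which is the finer granularity a blanket "allow contacts" grant cannot express.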
Android has become the most popular operating system for mobile devices, with millions of applications published for users to download and use. However, some of these applications may harbour security flaws that leave them vulnerable to determined attacks from the Internet, or may have been intentionally constructed with malicious intent. This project aims to develop three novel and complementary technologies for the security analysis of Android applications, under different use case scenarios and infrastructure constraints.
The research aims to address security challenges arising from the usage of security-sensitive applications without trusting the phone’s operating system, which is known to be vulnerable to attacks due to its enormous code size and large attack surface.
Compilers are a key technology of software development. They are relevant not only for general-purpose programming languages (such as C and Java) but also for many domain-specific languages. Compilers are error-prone, especially concerning less-used language features. Existing compiler testing techniques often rely on weak test oracles, which prevents them from finding deep semantic errors. This project aims to develop a novel specification-based fuzzing method named SpecTest for compilers. SpecTest has three components: an executable specification of the language, a fuzzing engine which generates test cases for programs in the language, and a code mutator which generates new programs for testing the compiler. SpecTest identifies compiler bugs by comparing the abstract execution of the specification with the concrete execution of the compiled program. Furthermore, with the mutator, SpecTest can systematically exercise those less-used language features.
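The comparison step can be sketched as follows on a toy expression language. This is a stand-in for the SpecTest idea only, not the actual tool: a reference interpreter plays the executable specification, a trivial "compiler" lowers the same program to Python source, a small fuzzer generates random programs, and any disagreement between the two executions flags a compiler bug.

```python
import random

def interpret(expr):
    """Executable specification: evaluate the expression tree directly.
    Expressions are nested tuples (op, left, right) over integers."""
    op, a, b = expr
    a = a if isinstance(a, int) else interpret(a)
    b = b if isinstance(b, int) else interpret(b)
    return a + b if op == "+" else a * b

def compile_to_python(expr) -> str:
    """'Compiler' under test: lower the tree to Python source text."""
    op, a, b = expr
    a = str(a) if isinstance(a, int) else compile_to_python(a)
    b = str(b) if isinstance(b, int) else compile_to_python(b)
    return f"({a} {op} {b})"

def fuzz_expr(rng, depth=3):
    """Fuzzing engine: generate a random expression tree."""
    if depth == 0:
        return (rng.choice("+*"), rng.randint(0, 9), rng.randint(0, 9))
    return (rng.choice("+*"),
            fuzz_expr(rng, depth - 1),
            fuzz_expr(rng, depth - 1))

def differential_test(trials=100, seed=0):
    """Compare spec execution against compiled execution on fuzzed
    programs; a mismatch would expose a compiler bug."""
    rng = random.Random(seed)
    for _ in range(trials):
        expr = fuzz_expr(rng)
        assert interpret(expr) == eval(compile_to_python(expr))
    return True
```

A mutator in the SpecTest sense would additionally rewrite the generated programs to steer them towards rarely exercised language features, which this sketch omits.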
Today’s malware analysis tools, especially those targeting kernel attacks, face the barrier of insufficient code path coverage to fully expose malicious behaviours, as that requires systematic exploration of kernel states. Although symbolic execution is a well-established solution for code coverage of benign programs, it does not overcome this barrier, because it is susceptible to attacks from the running target under analysis and unable to manage complex kernel execution. This project aims to develop novel techniques to automatically and systematically generate code paths for maliciously influenced kernel behaviours.
Control-Flow Integrity (CFI) enforcement is a promising technique for producing trustworthy software. This project focuses on function signature recovery, a critical step in CFI enforcement when source code is not available. Current approaches rely on the assumption that function signatures match at caller and callee sites in an executable; however, various compiler optimisations violate well-known calling conventions and result in mismatched recovered function signatures. The project aims to design and implement an automatic system to produce CFI-enforced program executables.
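To illustrate what signature-based CFI enforces (a sketch of the general idea only, not the project's system, which operates on binaries): an indirect call is permitted only if its target was registered as legitimate for a matching signature, approximated here by parameter count.

```python
import inspect

# Allowed indirect-call targets, keyed by arity (a crude stand-in for
# the full function signature a real CFI scheme would recover).
ALLOWED_TARGETS = {}

def register(fn):
    """Record fn as a legitimate indirect-call target for its signature."""
    arity = len(inspect.signature(fn).parameters)
    ALLOWED_TARGETS.setdefault(arity, set()).add(fn)
    return fn

def guarded_call(fn, *args):
    """CFI check: refuse indirect calls whose target was never
    registered with a signature matching this call site."""
    if fn not in ALLOWED_TARGETS.get(len(args), set()):
        raise RuntimeError("CFI violation: unexpected indirect-call target")
    return fn(*args)

@register
def add(a, b):
    return a + b

def hijacked(a, b):  # never registered: stands in for a corrupted pointer
    return -1
```

Here `guarded_call(add, 2, 3)` succeeds while `guarded_call(hijacked, 2, 3)` is rejected. The difficulty the project addresses is that, in stripped binaries, compiler optimisations can make the caller's and callee's apparent signatures disagree, so this matching must be recovered reliably before such checks can be enforced.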
Artificial Intelligence (AI) technologies have developed rapidly thanks to machine learning based on deep neural networks and their applications. Despite their exceptional performance, these complex models are often beyond human understanding and thus work in a black-box manner. This research aims to address the problem of explaining AI to AI system designers and expert AI system users, who need to know how an AI system makes its decisions.