Mobile devices and mobile applications are increasingly important in people's daily lives, and their security and privacy are raising growing concerns. "Do you allow the app to access your contacts, photos, media, files, messages, …" has become a common question users face when they start using an app, but such permission control mechanisms place too much burden on users. Most users do not fully understand the purposes of these accesses or the implications of granting permissions, and simply grant them most of the time, leading to significant misuse of their privacy-sensitive data by apps. This proposal aims to build the capabilities that enable automated, finer-grained, and customisable permission controls, promoting a privacy ecosystem that keeps users aware while reducing their burden, and that pushes app developers to improve the privacy protection of their apps.
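To make the intended finer-grained control concrete, the following is a minimal sketch, with entirely hypothetical rule names and fields, of a context-aware permission policy in which each access request is evaluated against user-customisable rules rather than a one-off grant.

```python
# Minimal sketch (all names and rules hypothetical) of a finer-grained,
# context-aware permission policy: each access request is evaluated against
# user-customisable rules instead of a single install- or first-use grant.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    app: str          # requesting app's package name
    resource: str     # e.g. "contacts", "photos", "location"
    purpose: str      # purpose declared by the app, e.g. "ads", "core-feature"
    foreground: bool  # whether the app is currently in the foreground

# User-customisable rules mapping (resource, purpose) to a decision.
POLICY = {
    ("location", "ads"): "deny",
    ("contacts", "core-feature"): "allow",
}

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'deny', or 'ask' for a single access request."""
    if not req.foreground:                 # escalate all background accesses
        return "ask"
    return POLICY.get((req.resource, req.purpose), "ask")

print(decide(AccessRequest("com.example.app", "location", "ads", True)))    # deny
print(decide(AccessRequest("com.example.app", "photos", "backup", False)))  # ask
```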
Android has become the most popular operating system for mobile devices, with millions of applications published for users to download and use. However, some of these applications may harbour security flaws that render them vulnerable to determined attacks from the Internet, or may have been constructed with malicious intent. This project aims to develop three novel and complementary technologies for the security analysis of Android applications under different use-case scenarios and infrastructure constraints.
The research aims to address security challenges arising from the use of security-sensitive applications without trusting the phone’s operating system, which is known to be vulnerable to attacks due to its enormous code size and large attack surface.
Compilers are a key technology in software development. They are relevant not only for general-purpose programming languages (such as C and Java) but also for many domain-specific languages. Compilers are error-prone, especially with regard to less-used language features. Existing compiler testing techniques often rely on weak test oracles, which prevents them from finding deep semantic errors. The project aims to develop a novel specification-based fuzzing method named SpecTest for compilers. SpecTest has three components: an executable specification of the language, a fuzzing engine that generates test cases for programs in the language, and a code mutator that generates new programs for testing the compiler. SpecTest identifies compiler bugs by comparing the abstract execution of the specification with the concrete execution of the compiled program. Furthermore, with the mutator, SpecTest can systematically exercise less-used language features.
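The oracle behind this comparison can be illustrated with a minimal differential-testing sketch; `spec_interpret` below is a hypothetical stand-in for the executable language specification, and `run_compiled` represents concrete execution of the binary produced by the compiler under test.

```python
# Minimal sketch of the differential oracle: compare the abstract execution
# of the specification with the concrete execution of the compiled program.
# `spec_interpret` is a hypothetical stand-in for the executable specification.
import subprocess

def spec_interpret(program_src: str, test_input: str) -> str:
    """Abstract execution: a reference interpreter derived from the language spec."""
    raise NotImplementedError("stand-in for the executable specification")

def run_compiled(binary_path: str, test_input: str) -> str:
    """Concrete execution of the binary produced by the compiler under test."""
    result = subprocess.run([binary_path], input=test_input,
                            capture_output=True, text=True, timeout=10)
    return result.stdout

def find_mismatches(program_src: str, binary_path: str, test_inputs):
    """Report inputs on which abstract and concrete executions disagree."""
    mismatches = []
    for t in test_inputs:            # test inputs come from the fuzzing engine
        expected = spec_interpret(program_src, t)
        actual = run_compiled(binary_path, t)
        if expected != actual:
            mismatches.append((t, expected, actual))
    return mismatches
```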
Today’s malware analysis tools, especially those targeting kernel attacks, face the barrier of insufficient code-path coverage to fully expose malicious behaviours, as doing so requires systematic exploration of kernel states. Although symbolic execution is the well-established solution for achieving code coverage on benign programs, it does not overcome this barrier because it is susceptible to attacks from the running target under analysis and incapable of managing complex kernel execution. This project aims to develop cutting-edge techniques that automatically and systematically generate code paths for maliciously influenced kernel behaviours.
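For context, the sketch below shows how symbolic execution is commonly used to discover an input reaching a target code path in a user-space binary with the angr framework; the binary name and addresses are hypothetical, and kernel-level, adversarial targets (the focus of this project) require substantially more machinery.

```python
# Minimal sketch of symbolic execution for code-path discovery on a
# user-space binary using angr; the binary name and addresses are hypothetical.
import angr

proj = angr.Project("./target_binary", auto_load_libs=False)
state = proj.factory.entry_state()                # fully symbolic initial state
simgr = proj.factory.simulation_manager(state)

# Explore program paths until one reaches the (hypothetical) suspicious code,
# while avoiding an uninteresting error handler.
simgr.explore(find=0x401234, avoid=[0x401300])

if simgr.found:
    found_state = simgr.found[0]
    # Concrete stdin bytes that drive execution down the discovered path.
    print(found_state.posix.dumps(0))
```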
Control-Flow Integrity (CFI) enforcement is a promising technique for producing trustworthy software. This project focuses on function signature recovery, a critical step in CFI enforcement when source code is not available. Current approaches rely on the assumption that function signatures match at caller and callee sites in an executable; however, various compiler optimisations violate well-known calling conventions and result in mismatched recovered signatures. The project aims to design and implement an automatic system that produces CFI-enforced program executables.
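As a simplified illustration of why signature recovery matters, the sketch below (with hypothetical recovered data) enforces the common CFI policy that an indirect call site may only target address-taken functions whose recovered signature matches; optimisations that violate calling conventions break exactly this matching.

```python
# Minimal sketch (with hypothetical recovered data) of signature-based CFI:
# an indirect call site may only target address-taken functions whose
# recovered signature (approximated here by the argument count) matches.
CALLSITE_SIGS = {0x4010a0: 2, 0x4011f0: 3}   # call-site address -> #args prepared
FUNCTION_SIGS = {0x402000: 2, 0x402400: 3, 0x402800: 2}  # function -> #params used

def allowed_targets(callsite: int) -> set:
    """Set of functions an indirect call at `callsite` is allowed to reach."""
    nargs = CALLSITE_SIGS[callsite]
    return {f for f, n in FUNCTION_SIGS.items() if n == nargs}

def cfi_check(callsite: int, target: int) -> bool:
    """Runtime check conceptually inserted before the indirect call."""
    return target in allowed_targets(callsite)

print(cfi_check(0x4010a0, 0x402800))   # True: argument counts match
print(cfi_check(0x4011f0, 0x402800))   # False: mismatch, transfer blocked
```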
Artificial Intelligence (AI) technologies have developed rapidly thanks to machine learning based on deep neural networks and their applications. Despite their exceptional performance, these complex models are often beyond human understanding and thus operate as black boxes. The research aims to address the problem of explaining AI to AI system designers and expert AI system users who need to know how AI makes decisions.
This project aims to provide secure remote access control over identity information of Internet-of-Things (IoT) devices to prevent sensitive information from being stolen.
Software development today relies on Application Programming Interfaces (APIs), and identifying suitable APIs can directly influence the success or failure of a software project. Although a large number of third-party APIs are available on the Internet, selecting the right ones for a project is challenging. This research proposes DeepSense, a big-data, deep-learning, and exploratory-search approach to API recommendation that aims to improve software developers’ productivity. Its success will benefit the software engineering and artificial intelligence research communities, software developers, and institutions developing IT solutions.
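One plausible ingredient of such a learning-based recommender, shown here only as an illustrative sketch and not as the DeepSense design, is to embed a developer's natural-language query and candidate API descriptions, then rank APIs by cosine similarity.

```python
# Illustrative sketch only (not the DeepSense design): rank a toy corpus of
# API descriptions against a developer query by embedding cosine similarity.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy corpus of API descriptions; a real system would index many more APIs.
apis = {
    "java.nio.file.Files.readAllLines": "read all lines from a text file",
    "java.net.http.HttpClient.send": "send an HTTP request and receive a response",
    "java.util.zip.ZipFile": "read entries from a ZIP archive",
}

query = "download the contents of a URL"
doc_vecs = model.encode(list(apis.values()), normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# With normalised embeddings, cosine similarity is a plain dot product.
scores = doc_vecs @ query_vec
for name, score in sorted(zip(apis, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {name}")
```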
This project aims to optimise the response of fire engines and ambulances to medical and fire incidents in a prioritised manner.
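A minimal sketch of prioritised dispatch, with entirely hypothetical data: pending incidents wait in a priority queue ordered by severity and arrival order, and each is served by the nearest available vehicle of the appropriate type.

```python
# Minimal sketch (all data hypothetical) of prioritised dispatch: incidents
# wait in a priority queue ordered by severity, then arrival order, and each
# is assigned the nearest available vehicle of the appropriate type.
import heapq
import itertools

incidents = []                 # min-heap of (severity, arrival_seq, incident)
arrival = itertools.count()

def report(severity, kind, location):
    """Lower severity value means more urgent (e.g. 1 = life-threatening)."""
    heapq.heappush(incidents, (severity, next(arrival), (kind, location)))

def dispatch(available):
    """Serve the most urgent incident with the nearest free vehicle."""
    severity, _, (kind, loc) = heapq.heappop(incidents)
    fleet = available["ambulance" if kind == "medical" else "fire_engine"]
    nearest = min(fleet, key=lambda v: (v[0] - loc[0])**2 + (v[1] - loc[1])**2)
    fleet.remove(nearest)
    return kind, nearest, severity

report(2, "fire", (1.30, 103.85))
report(1, "medical", (1.35, 103.82))
fleets = {"ambulance": [(1.34, 103.80)], "fire_engine": [(1.29, 103.86)]}
print(dispatch(fleets))   # severity-1 medical incident is served first
```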