News & Events

MOE AcRF Tier 2 Grant Awards

Published on: 13-Aug-2019

Congratulations to the following professors on receiving MOE AcRF Tier 2 Grants.



Assistant Professor Liu Weichen

Project: “Distanceless Communication Architecture for Many-Core Processors”

A many-core processor, or many-core system, is a processor chip designed for a high degree of parallelism, containing a large number of processor cores (typically tens, hundreds or even thousands). Network-on-Chip (NoC) has emerged as the prevailing communication architecture for next-generation many-core systems thanks to its high flexibility and scalability. It provides a fundamentally new communication infrastructure that enables the easy and fast integration of thousands of different Intellectual Property (IP) cores and supports simultaneous, high-bandwidth communication among them. NoC-based many-core processors have become a popular way to satisfy the rapidly growing demands for high performance, low power and heterogeneous integration in both general-purpose computing and application/domain-specific implementations.

However, driven by the rapid growth in application complexity, especially in AI and big data analytics, more diversely integrated chips with different types of computing units and memory components are expected to accelerate these important applications with greatly enhanced energy efficiency, both in large-scale cloud data centers and in embedded edge devices with varying form factors and real-time requirements. It is therefore essential, and extremely challenging, to build the next-generation on-chip communication architecture as an integral part of the many-core system, providing ultra-high performance, high routing flexibility and quality of service.

This proposal will practically address the design and optimization problems of many-core systems built on the emerging reconfigurable communication architectures (RCAs), the next-generation communication infrastructure for chips. The proposed system-level design approaches together form a full-stack software solution for RCAs, targeting higher inter-processor communication efficiency, application performance and system energy efficiency. The distanceless communication feature of the proposed smart communication architecture will fundamentally advance the communication paradigm and make systems more adaptable to future heterogeneous designs and customized computing architectures. The proposed techniques will benefit the development of increasingly complex applications, such as deep learning and big data analytics, and this project will greatly improve the state of the art in scalable on-chip communication architecture for emerging many-core systems.
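To ground the "distanceless" motivation, the minimal sketch below shows conventional dimension-ordered (XY) routing on a 2D-mesh NoC, where latency grows with the hop distance between cores; this is the baseline behaviour that a distanceless architecture seeks to eliminate. The function and coordinates are purely illustrative and are not taken from the project itself.

```python
# Minimal sketch: deterministic XY routing on a 2D-mesh NoC.
# Illustrates the distance-dependent baseline that reconfigurable
# architectures aim to improve on; all names are illustrative.

def xy_route(src, dst):
    """Return the hop-by-hop path from src to dst on a 2D mesh,
    routing fully in X first, then in Y (deadlock-free on a mesh)."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:                    # travel along the X dimension
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                    # then along the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# On a conventional mesh, latency grows with hop count -- here 5 hops.
print(xy_route((0, 0), (3, 2)))
# [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```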





Assistant Professor Li Yi

Project: “Scalable Compatibility Checking of Software Component Upgrades”

Today’s computer systems are large, complex, and ubiquitous. With the boom in intelligent systems such as smartphones, self-driving cars, and robots, our lives increasingly depend on the correct operation of the software running on them. These complex software systems are often built by integrating components created by different teams or even different organizations, and these components inevitably evolve over time. With little understanding of the changes made to external components, it is very challenging to ensure that new upgrades do not introduce incompatibility issues into the existing software system. A recent study shows that 81.5% of the studied systems keep their outdated dependencies. When it comes to security patches, 69% of the interviewed developers were unaware of the vulnerabilities in the libraries they use and were therefore reluctant to upgrade because of the extra effort and added responsibilities. A similar situation is faced by automakers when deploying over-the-air (OTA) updates for onboard computer systems. Ford and GM, among others, recently announced that some of their 2020 models will allow OTA updates that can upgrade a vehicle with new features, or even remotely fix faulty vehicle software. However, due to the safety-critical nature of automotive software, when automakers start updating software remotely, any failure could be just as dangerous as a faulty repair made by a mechanic -- and it might affect thousands of vehicles at the same time.

Clearly, the lack of an automated mechanism for establishing confidence in the semantic-level compatibility of software upgrades has significantly hindered both software development and software maintenance. An automated approach to processing, verifying, and validating component upgrades is thus a must for ensuring the correctness, stability, robustness and security of software systems. To upgrade or not to upgrade: that is the question. This proposal addresses the problem by investigating the impact of upgrading a component (which we refer to as a “library”) on its downstream consumers (which we refer to as “clients”) and by developing a state-of-the-art automated upgrade compatibility checking framework. The framework takes a library component before and after an upgrade and decides whether the upgrade would affect a specific client component. The library component can scale up to a complete third-party package, or even a collection of packages. We also aim to keep the analysis as generic as possible, so that it applies to software written in different programming languages and even to binaries.
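As a rough illustration of the client-aware idea (not the project’s actual semantic analysis, which goes deeper than testing), the sketch below replays inputs that a client actually exercises against the old and new versions of a library function and reports behavioural differences. All names and the example upgrade are hypothetical.

```python
# Minimal sketch: client-aware compatibility checking by differential
# testing -- a simple stand-in for semantic-level upgrade analysis.
# All function names and inputs here are hypothetical.

def is_compatible(old_fn, new_fn, client_inputs):
    """Return (ok, witnesses): ok is False if any client-exercised
    input produces different results across the two versions."""
    witnesses = []
    for args in client_inputs:
        try:
            old_out = old_fn(*args)
        except Exception as e:
            old_out = type(e)        # normalise exceptions for comparison
        try:
            new_out = new_fn(*args)
        except Exception as e:
            new_out = type(e)
        if old_out != new_out:
            witnesses.append((args, old_out, new_out))
    return (not witnesses, witnesses)

# An upgrade that changes rounding behaviour is incompatible only for
# clients whose inputs actually hit the changed behaviour.
old = lambda x: round(x)      # banker's rounding in Python 3
new = lambda x: int(x + 0.5)  # naive rounding after the "upgrade"
ok, diffs = is_compatible(old, new, [(0.5,), (2.5,), (2.0,)])
print(ok, diffs)  # False -- (0.5,) and (2.5,) are witnesses
```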

The proposed technique will be the first of its kind to reason about the compatibility of real program updates at the semantic level (as opposed to the syntactic-level analysis provided by compilers). The success of this project promises theoretical advances in the cutting-edge field of automated software analysis, along with practical progress towards disciplined, rigorous and secure software evolution management methodologies suitable for real-world development practice. The techniques proposed in this project also have wide applications in the software industry and great potential for mass adoption and commercialization.





Assistant Professor Zhao Jun

Project: “Leveraging Blockchains for Secure and Privacy-Aware Distributed Machine Learning”

Machine learning has powered many artificial intelligence applications. In traditional machine learning, the training dataset is kept in one place. Since the training dataset may include users' sensitive information (e.g., medical records), protecting users' data privacy is crucial. To this end, distributed machine learning of the following form has existed for years: different parties collaboratively train a machine learning model without any party having to share its data directly with another. In this process, an aggregator typically initializes the model and sends the initial parameters to each party; each party uses its data and the parameters to compute an update (e.g., a gradient), which it sends back to the aggregator. Upon receiving the updates from many parties, the aggregator averages them, uses the average to update the parameters, and sends the new parameters to the parties to start the next iteration.

However, placing trust in the aggregator is risky and has the following shortcomings. First, parties have little control over how the aggregator uses their data. For example, a malicious or compromised aggregator can use the updates received from parties to infer the parties' sensitive information, drop certain updates, or deny having received updates if payments are owed to the parties. Also, the aggregator is a single point of failure: the whole system goes down if the aggregator stops functioning. Finally, the aggregator may become a target for denial-of-service attacks, in which an adversary floods it with garbage updates.

In view of these shortcomings, we propose to replace the aggregator with a blockchain-based system. Our approach will achieve the following desired properties: better transparency, more accountability, better reliability, and less required trust. First, the blockchain eliminates the monopoly: users' data is better protected, and users have more transparency into how their data is used. Second, any malicious use of their data by an entity in the blockchain is held accountable. Third, the blockchain, as a decentralized approach, is more reliable than the centralized setting and avoids a single point of failure. Fourth, the trust previously placed in the aggregator is removed: users do not need to trust the entities in the blockchain, as the technology is robust enough to perform the desired functionality without trust between entities.

To achieve privacy protection, we adopt the formal notion of differential privacy (DP) to quantify privacy. Intuitively, by incorporating some noise, the output of an algorithm under DP does not change significantly with the presence or absence of any single user's information in the dataset. Hence, an attacker will have difficulty using the output of a DP-compliant algorithm to infer any user's information.
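The sketch below illustrates one round of the aggregation loop described above, with each party's update clipped and perturbed with Gaussian noise as a stand-in for a differentially private mechanism. The model, clipping bound, and noise scale are illustrative assumptions rather than the project's design, and the averaging step is shown as a plain function rather than a blockchain.

```python
# Minimal sketch: one round of distributed training with noisy updates.
# Model, hyperparameters, and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(params, X, y, lr=0.1):
    """Hypothetical party-side step: one gradient step for a
    least-squares model on local data, returning a parameter delta."""
    grad = X.T @ (X @ params - y) / len(y)
    return -lr * grad

def privatize(update, clip=1.0, sigma=0.5):
    """Clip the update to bound its sensitivity, then add Gaussian
    noise -- the standard recipe behind (epsilon, delta)-DP."""
    norm = np.linalg.norm(update)
    clipped = update if norm == 0 else update * min(1.0, clip / norm)
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)

def aggregate(params, noisy_updates):
    """Aggregator step (or its blockchain replacement): average the
    noisy updates and apply the average to the global parameters."""
    return params + np.mean(noisy_updates, axis=0)

# One training round with three parties holding disjoint data shards.
params = np.zeros(2)
shards = [(rng.normal(size=(20, 2)), rng.normal(size=20)) for _ in range(3)]
updates = [privatize(local_update(params, X, y)) for X, y in shards]
params = aggregate(params, updates)
```

Because each party perturbs its own update before sending it, whichever entity performs the averaging never sees an exact gradient, which is what limits the inference attacks described above.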





Associate Professor Sourav S Bhowmick

Project: “Towards Hallmark-aware Data-driven Target Combination Prediction for Cancer”

Interest in combination therapy, arising from positive clinical experience in diseases such as cancer, has sparked renewed research efforts in combination drugs. However, identifying good target combinations to facilitate combination therapy has been a long-standing and challenging problem. Although the current approach of leveraging expert knowledge and high-throughput screening (HTS) of drug combinations has led to the development of several combination drugs, it is well known that this is a tedious process that relies heavily on individuals’ ability to link seemingly unrelated information. Hence, such an approach is inefficient for solving a non-linear problem such as identifying target combinations in a rational way. In particular, it can be prohibitively expensive to screen all possible combinations, owing to the difficulty of choosing which molecular/cellular variables to measure. In silico approaches that can identify superior target combinations address these limitations by providing a rational platform for prescreening target combinations. They can complement HTS and high-content analysis (HCA) by offering exemplary efficiency and economy of scale for a first-pass combinatorial exploration of the effectiveness of target combinations, inexpensively covering a vast number of possibilities. The results of such computational exploration can provide critical insights to guide experimental follow-up.

This proposal’s grand goal is to challenge the traditional target combination identification process for cancer by seeking an answer to the following fundamental question: why can we not predict good target combinations automatically in silico by analysing publicly available signalling networks, omics data, and cancer hallmark information for different (sub)types of cancer, given that existing experimental approaches such as hypothesis testing and HTS are too costly to assess all possible combinations? Specifically, we aim to devise efficient, high-quality techniques that integrate publicly available cancer signalling networks with omics data and cancer hallmarks to predict target combinations that may effectively modulate a set of disease nodes implicated in a specific (sub)type of cancer. Successful realization of this research may improve the discovery phase of the current target-based drug discovery process, in terms of both the efficiency of target discovery and the efficacy of the discovered targets, leading to the identification of superior drug combinations.
