News & Events


MOE AcRF Tier 2 Grant Awards

Published on: 26-Aug-2020

Congratulations to the following professors for receiving MOE AcRF Tier 2 Grants:



  Nanyang Associate Professor Loy Chen Change

  Project: “Self-Supervised Visual Representation Learning”



The new wave of Artificial Intelligence (AI) has been driven mainly by the emergence of deep learning. Modern deep learning systems rely heavily on high-capacity deep neural networks coupled with massive annotated data to learn effective representations. For instance, training an accurate deep network for face recognition requires billions of identity-labelled images, and an autonomous driving system can only recognise lanes on the road after being trained on a few hundred thousand images with human-annotated lane markings. Although impressive results have been achieved, we are now trapped in a dilemma: behind each application and each percentage point of accuracy gained lie hundreds of thousands of manual labelling hours. The data-hungry nature of deep learning severely hampers the deployment of this technology in many applications.

Humans, by contrast, have a remarkable ability to gain useful knowledge without direct supervision. By observing the motion of objects, a human can grasp the structure and part information of an object and generalise that knowledge to unseen samples. "Can deep models learn meaningful visual representations without labelled data?" Learning high-level representations from raw information remains elusive, and this question is now one of the most discussed topics in the AI community.

The main objective of this project is to develop new learning approaches that allow deep networks to learn from massive amounts of video without explicit annotations. In particular, we will explore novel self-supervised learning methods that exploit the rich structural and temporal information in unlabelled videos for representation learning. This is made possible by designing effective pretext tasks that derive pseudo supervisory signals from unlabelled data for network training. The learned representation can then be transferred to downstream tasks such as image recognition, object detection, and image segmentation.
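To give a flavour of how a pretext task derives pseudo supervisory signals from unlabelled data, the sketch below generates temporal-order labels from a raw frame sequence: a clip either keeps its natural order or is shuffled, and the binary label comes from the data itself rather than from a human annotator. This is a generic, minimal illustration of the idea, not the project's actual method; the function name and parameters are hypothetical.

```python
import random

def make_temporal_order_examples(frames, clip_len=3, n_examples=4, seed=0):
    """Derive pseudo-labelled training pairs from an unlabelled frame
    sequence: a clip keeps its natural temporal order (label 1) or is
    shuffled (label 0). No human annotation is needed; the video's own
    temporal structure supplies the supervisory signal."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n_examples):
        start = rng.randrange(len(frames) - clip_len + 1)
        clip = frames[start:start + clip_len]
        if rng.random() < 0.5:
            examples.append((clip, 1))       # natural order
        else:
            shuffled = clip[:]
            while shuffled == clip:          # ensure the order actually differs
                rng.shuffle(shuffled)
            examples.append((shuffled, 0))   # shuffled order
    return examples

# Integers stand in for video frames. A network trained to predict the
# label must pick up motion and structure cues, i.e. a useful representation.
frames = list(range(10))
for clip, label in make_temporal_order_examples(frames):
    print(clip, label)
```

A network trained on these pairs never sees a human label, yet solving the task forces it to model temporal structure, which is what makes the learned features transferable to downstream tasks.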





  Associate Professor Anwitaman Datta

  Project: “BlockChain I/O: A framework for cross-chain interoperability”



Blockchains are being tried out as an important building block for modern IT systems across a wide range of sectors, which has led to many blockchain deployments that operate independently of one another. Enabling these blockchains to interoperate, going beyond the swapping of individual pieces of information or assets to supporting distributed workflows, has the potential to amplify the value they provide and to lower the barriers to migrating to blockchain-based systems.

To that end, there is a need to address the issues of identity and fine-grained access control, and to provide transactional guarantees for inter-chain actions. These in turn rely on reliable and scalable communication primitives. Because these challenges are inherently intertwined, the emphasis of the project is on investigating and solving them jointly, in a mutually consistent manner.
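As background to "swapping individual pieces of information or assets": the standard primitive for a pairwise cross-chain asset swap is a hash time-locked contract (HTLC), where funds can be claimed with a secret preimage before a deadline or refunded afterwards. The toy model below illustrates that primitive only; it is not part of the BlockChain I/O framework, and the class and method names are hypothetical.

```python
import hashlib

class HTLC:
    """Toy hash time-locked contract. In a real cross-chain swap, one
    such contract is deployed on each chain with the same hash lock, so
    claiming on one chain reveals the secret needed to claim on the other."""

    def __init__(self, hash_lock, deadline, amount):
        self.hash_lock = hash_lock   # sha256 digest of the secret preimage
        self.deadline = deadline     # timestamp after which refund is allowed
        self.amount = amount
        self.state = "locked"

    def claim(self, preimage, now):
        """Release funds if the correct preimage arrives before the deadline."""
        if (self.state == "locked" and now < self.deadline
                and hashlib.sha256(preimage).digest() == self.hash_lock):
            self.state = "claimed"
            return self.amount
        return 0

    def refund(self, now):
        """Return funds to the original owner once the deadline has passed."""
        if self.state == "locked" and now >= self.deadline:
            self.state = "refunded"
            return self.amount
        return 0

secret = b"swap-secret"
htlc = HTLC(hashlib.sha256(secret).digest(), deadline=100, amount=10)
print(htlc.claim(b"wrong", now=0))   # 0: wrong preimage, nothing released
print(htlc.claim(secret, now=0))     # 10: correct preimage before the deadline
```

The abstract's point is precisely that such per-asset swaps are the baseline: supporting distributed workflows across chains additionally requires the identity, access-control, and transactional guarantees the project targets.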

This project will be carried out jointly in collaboration with SUTD.





  Professor Liu Yang

  Project: “Smart Safe and Robust Motion Control for Multi-Robot Systems”



Autonomous mobile robots play an important role in Industry 4.0, and motion planning is one of their most important capabilities. However, in dynamic and complex environments, real-time motion planning methods suffer from high computational load, which lowers motion efficiency and limits their real-world applications. Moreover, because robots expose a broad attack surface and operate in open environments, they are easy targets for attacks in practice, leading to security problems during their motion. It is therefore both important and urgent to develop novel technologies for robot motion so that autonomous robots can be deployed in the real world and help transform our nation into a smart one.

This project will develop efficient, safe, and robust motion controllers for multi-robot systems. The objectives are:

(1) Distributed and efficient approaches to generating motion for each robot using conventional planning methods and machine learning technologies. This objective investigates how to leverage the online computational efficiency of machine learning models to facilitate the online motion generation of conventional methods. It contains two subtasks: building machine learning models that generate preliminary motion for a robot in arbitrary environments, and computing safe, optimal motion from the generated preliminary motion.

(2) Approaches to robust control against intra-robot attacks, i.e., attacks on some components of a robot that do not compromise its basic motion functionality. When some devices of a robot, such as its sensors, are attacked but the basic motion functionality remains intact, this objective investigates how to detect and analyse whether devices are under attack and how to mitigate such attacks so that the attacked robot can continue its motion.

(3) Approaches to robust control against inter-robot attacks, i.e., attacks that take full control of a robot and turn it into an adversarial one. Under such attacks, the compromised robot no longer behaves correctly and affects the motion of the others. This objective aims to guarantee the motion of benign robots, focusing on the detection and identification of malicious robots and on the design of motion control algorithms that allow benign robots to avoid collisions with them.
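One simple way to frame the detection question in objective (2), assuming a robot carries redundant sensors for the same quantity, is to compare each reading against a robust aggregate of the others. The sketch below flags readings that deviate too far from the median; it is only a minimal stand-in for illustration, not the project's detection method, and the function name, sensor names, and tolerance are hypothetical.

```python
import statistics

def flag_compromised(readings, tolerance=0.5):
    """Flag sensors whose reading deviates from the median of the
    redundant set by more than `tolerance`. The median is a robust
    reference as long as fewer than half the sensors are compromised."""
    ref = statistics.median(readings.values())
    return sorted(name for name, value in readings.items()
                  if abs(value - ref) > tolerance)

# Three redundant range estimates for the same obstacle; the hypothetical
# 'lidar' reading has been spoofed by an attacker.
readings = {"lidar": 9.7, "camera": 2.1, "ultrasonic": 2.3}
print(flag_compromised(readings))   # ['lidar']
```

Once a device is flagged, the mitigation described in the abstract would exclude or down-weight it so that the robot's motion can continue on the remaining intact sensors.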




