MOE AcRF Tier 2 Grant Awards

Published on: 09-Mar-2017

Congratulations to the following professors on receiving MOE AcRF Tier 2 Grants.

Associate Professor Luo Jun

Project: “VIBSens: Visible Light Building Sentience for Lightweight & Efficient Smartness”

With the ever-growing interest in making our working and living environments more adaptive to human needs while keeping a smaller carbon footprint, both academia and industry have invested tremendous effort in making the things around us “smart”, from individual homes to entire nations. Within this chain of smart facilities, the smart building is key: on one hand, it is a collection of homes and offices; on the other, buildings account for the majority of the spaces in which a nation’s people spend their time. Making a building smart essentially requires equipping it with sufficient “sentience”, so that it is aware of its current status and of the needs of its (human) users. This mirrors the way human intelligence depends on the senses: without the information collected by the sense organs, no form of intelligence or smartness would be possible, as data and information are its raw material.

A naïve approach to building sentience, practised by industry for many years on a relatively limited scale, is to equip a building with a variety of dedicated sensors: for instance, passive infra-red (PIR) sensors for occupancy detection, motion sensors for gesture and behaviour recognition, and proximity sensors for inferring presence. Such an approach is far from practical, as it entails a complicated building management system that is unlikely ever to be reliable enough for real applications, while the extra energy it consumes may offset any energy savings the sensing infrastructure could offer. One recent trend aims to provide a unified wireless sensing capability on top of the Wi-Fi infrastructure, on the assumption that human presence and behaviour cause measurable fluctuations in the wireless signal. Nevertheless, while many theoreticians question the feasibility of this approach and suspect that the limited successes of a few experimental systems are down to pure luck, the ability of Wi-Fi based wireless sensing to handle multi-user scenarios is clearly questionable, and Wi-Fi coverage is not, and may never be, pervasive enough to reach all indoor spaces.

An even more recent trend is to use visible light (rather than wireless signals) as the sensing medium; the pervasiveness of this medium is, obviously, guaranteed. Unfortunately, the few existing proposals all demand a dedicated light-sensing infrastructure to sense fluctuations in the visible light field and thereby deliver the sentience a building’s indoor spaces require. Noticing that most indoor visible light is produced by the lighting systems inherent to a building, and that there is an inevitable trend of revamping outdated lighting infrastructure with Light-Emitting Diode (LED) luminaires, our proposal innovates by realizing a type of building sentience that piggybacks on the lighting systems virtually present in every indoor space. Specifically, assuming a fully LED-lit indoor space, we slightly re-engineer the lighting system (the drivers of individual LED luminaires) so that each luminaire can serve as both an information transmitter and a data collector, all using visible light as the medium. Building upon this innovative idea, we aim to construct VIBSens (Visible lIght Building Sentience) as a comprehensive visible-light-driven communication and sensing system, and to investigate several fundamental yet practical aspects of visible light enabled building sentience:

1) Visible Light Enabled Communication: Although visible light communication (VLC) has been a research topic for many years, its theory remains far from practice. We intend to study and build a practical VLC platform offering a data rate sufficient to partially replace Wi-Fi (a minimal modulation sketch follows this list).

2) Visible Light Enabled Occupancy Detection: While visible light sensing (VLS) enabled gesture recognition has been proposed very recently, no one has yet detected human occupancy indoors through VLS, so our project will push VLS to a new frontier.

3) Visible Light Enabled Tracking and Localization: Building upon the results obtained for VLS, we proceed to tackle more sophisticated problems, such as tracking user locations using visible light.

4) Further Extensions beyond Indoor Sensing: We finally plan to extend our indoor solutions to semi-indoor or even outdoor environments; for example, VLS occupancy detection may potentially be applied to SMRT platforms.
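As a rough illustration of the communication thrust, the sketch below encodes bits with on-off keying (OOK), the most basic VLC modulation scheme, in which the LED state itself carries the data. The frame format, preamble and assumption of perfect synchronization are our own illustrative choices, not the platform the project will actually build.

```python
# Minimal on-off keying (OOK) sketch for visible light communication.
# Illustrative only: a real VLC driver switches far above the flicker
# fusion rate so the modulation is invisible; framing is an assumption.

PREAMBLE = [1, 0, 1, 0, 1, 0, 1, 1]  # hypothetical sync pattern

def encode_ook(payload: bytes) -> list[int]:
    """Map bytes to LED on/off states (1 = LED on, 0 = LED off)."""
    bits = []
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return PREAMBLE + bits

def decode_ook(samples: list[int]) -> bytes:
    """Recover bytes from photodiode samples (assumes perfect sync)."""
    data = samples[len(PREAMBLE):]       # skip the located preamble
    out = bytearray()
    for i in range(0, len(data) - 7, 8):
        byte = 0
        for bit in data[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    tx = encode_ook(b"hi")
    assert decode_ook(tx) == b"hi"
```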

Note that, although a CCTV-based infrastructure could serve part of our project’s purpose, it requires a huge amount of processing power to analyse even a short period of video, and it severely violates privacy, for which reason it has been rejected by many indoor facilities. Our approach uses the light itself (rather than images), so it demands much less processing power and raises no privacy concerns at all, making VIBSens a potentially very practical solution.

Our proposal is the first comprehensive investigation into the realization and application of the innovative idea of “light sensing light”. The outcome of our project will fundamentally expand the scope of smart building research and practice, while leading to solutions of practical significance that can be readily deployed and tested in support of Singapore’s smart nation ambition.

Associate Professor Li Mo

Project: “Data Driven Sensing and Analysis for Smart Transportation”

Transportation plays an important role in people’s daily lives: every day in Singapore there are 2.8 million MRT/LRT trips, 1.0 million taxi trips, and 3.8 million bus trips. Smarter and more efficient transportation is a common goal for metropolitan cities, especially high-growth, land-scarce ones with no spare land for additional transportation infrastructure. It is therefore critical to monitor and manage transportation conditions accurately and efficiently so as to improve the utilization of the existing infrastructure.

In this proposal, we plan to study transportation sensing and analysis from a data point of view. Public transit systems generate a large volume of mobility data containing digital footprints of urban transportation conditions and operations, e.g., location reports from onboard GPS devices in taxis and buses, smartphone reports from bus and train riders, and records from the electronic fare collection (farecard) devices deployed across all transit types (trains, buses, taxis, etc.). Around 15 million farecard records for bus and MRT train trips, 50 million bus operation records, and 80 million taxi trajectory records are produced every day in Singapore. A combined analysis of this mobility data offers a unique data perspective on urban transportation that has not been thoroughly studied in the transportation community. The data-driven approach has major advantages: high coverage of mobility data, low cost of infrastructure deployment and maintenance, and timeliness of data acquisition and analysis.

The success of this research will generate new knowledge in the computer science domain, i.e., a set of effective data sensing and analysis algorithms and systems tailored to mobility data and transportation applications. At the same time, people and government agencies will directly benefit from the systems, tools, and analytical results produced in this project: for example, an understanding of travel demand across transport modes, together with in-situ traffic maps and people mobility graphs, will help all parties respond effectively to emergencies such as MRT train service disruptions and produce suitable evacuation plans.
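As a simple illustration of the kind of analysis such mobility data enables, the sketch below aggregates hypothetical farecard tap-in/tap-out records into origin-destination demand counts. The record schema is our own assumption for illustration, not the format of Singapore’s actual farecard data.

```python
# Sketch: origin-destination (OD) demand from farecard records.
# The (card_id, tap_in_stop, tap_out_stop, tap_in_time) schema is a
# hypothetical stand-in for real electronic fare collection data.
from collections import Counter

def od_demand(records):
    """Count trips between each (origin, destination) stop pair."""
    demand = Counter()
    for rec in records:
        demand[(rec["tap_in_stop"], rec["tap_out_stop"])] += 1
    return demand

if __name__ == "__main__":
    sample = [
        {"card_id": "A1", "tap_in_stop": "NS1", "tap_out_stop": "NS9", "tap_in_time": "08:01"},
        {"card_id": "B2", "tap_in_stop": "NS1", "tap_out_stop": "NS9", "tap_in_time": "08:03"},
        {"card_id": "C3", "tap_in_stop": "EW4", "tap_out_stop": "NS1", "tap_in_time": "08:10"},
    ]
    for (o, d), n in od_demand(sample).most_common():
        print(f"{o} -> {d}: {n} trips")
```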

Associate Professor Lin Weisi

Project: “Significance-based Large-Scale 3D Point Cloud Compression and Management”

We live in a three-dimensional (3D) world. With the recent rapid development of signal sensing and processing technology, massive 3D point-cloud data is increasingly available to describe, and even actuate, our 3D world, whether from vision-based 3D reconstruction (e.g., via structure from motion, stereo or multiple views) or more direct acquisition using 3D scanners, Kinect and so on. 3D point clouds are expected to become a widely used data representation in more and more applications of 3D visual intelligence, e.g., city planning, security, disaster management, robot navigation, autonomous vehicles, event detection, virtual reality, and many other emerging uses addressing sustainable and smart city challenges. Since 3D point clouds carry more comprehensive information than two-dimensional (2D) images, they can boost the performance of visual intelligence applications and services. However, for large-scale point clouds, especially city-scale ones, a huge amount of 3D point data needs to be transmitted, stored and processed, which quickly runs up against the constraints of bandwidth, memory, power/battery and computing resources; there is thus an urgent call for effective and efficient point cloud compression and management technology.

A useful 3D point cloud may include millions (or even billions) of points, each with multiple attributes, e.g., coordinates, colour, normal and derived local features; a practical point cloud therefore occupies tremendous storage space and channel bandwidth. It is usually impossible to store large-scale point clouds without good organization and compression; in addition, point-cloud data is transmitted frequently during acquisition, processing and application. Take autonomous navigation as an example: 3D point-cloud data must be transmitted from servers to mobile client terminals for 3D model reconstruction, quickly and economically within the available channel bandwidth. Point-cloud compression and management are therefore vitally important, and a formulation of 3D data significance assessment can provide a meaningful criterion to guide these processes with the necessary discrimination and control.

No existing scheme systematically considers the significance of point-cloud data for the various applications of 3D visual intelligence. Applications involving large point clouds would be impossible, or would waste computation, channel capacity, energy and memory, if the data were handled without such discrimination and without appropriate Levels of Detail (LoDs); furthermore, the performance of applications is difficult to control without an explicit, effective formulation of data significance (“If you cannot measure it, you cannot improve it.” -- William Thomson). For example, given time, device and bandwidth constraints, the delay tolerance, computational complexity and bit budget of compressed point-cloud data can be defined; with a point-cloud data significance metric in hand, source and channel coding strategies can then be derived and optimized for the best performance under those constraints. How to formulate and assess the significance of 3D point-cloud data with respect to different applications is therefore the focus of the proposed research.

In this proposal, we will systematically explore a novel, integrated framework for 3D point-cloud compression and management guided by the formulated data significance measure. We will begin by designing a hierarchical 3D point-cloud representation scheme with a base layer (BL) and enhancement layers (ELs), in which different point-cloud attributes are assigned to different layers according to how frequently they are used across applications of 3D visual intelligence. To be specific, we place the most important and most frequently used data, i.e., the coordinates, in the BL, while the other attributes (e.g., colour, normals, and local feature descriptors) are arranged as ELs. The BL is encoded independently, and the ELs are encoded with prediction from the BL; the different kinds of 3D attributes in the ELs are encoded independently of each other, so that individual attribute data can be transmitted for specific applications.
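A minimal data-layout sketch of this base/enhancement layering is shown below; the container classes and the choice to treat colour, normals and features as separate ELs are illustrative assumptions, not the project’s actual codec design.

```python
# Sketch: hierarchical point-cloud layout with a base layer (BL) holding
# coordinates and enhancement layers (ELs) holding other attributes.
# Illustrative structure only; the real codec design is part of the project.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class BaseLayer:
    coords: np.ndarray                    # (N, 3) xyz, encoded independently

@dataclass
class EnhancementLayer:
    name: str                             # e.g. "color", "normal", "feature"
    values: np.ndarray                    # (N, d), predicted from the BL

@dataclass
class LayeredPointCloud:
    bl: BaseLayer
    els: dict[str, EnhancementLayer] = field(default_factory=dict)

    def add_el(self, name: str, values: np.ndarray) -> None:
        assert len(values) == len(self.bl.coords), "one row per point"
        self.els[name] = EnhancementLayer(name, values)

    def subset_for_app(self, needed: list[str]) -> "LayeredPointCloud":
        """Ship only the ELs a given application actually needs."""
        pc = LayeredPointCloud(self.bl)
        pc.els = {n: self.els[n] for n in needed if n in self.els}
        return pc
```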

At the core of the project, we will investigate point-cloud significance metrics that assess the influence of each piece of point-cloud data on the performance of practical applications. In our formulation, point-cloud data with higher significance values has greater influence on the performance of the relevant 3D visual intelligence and is to be encoded first. We will further tailor the significance assessment to the data acquisition mechanism of the point cloud. For point clouds constructed from massive 2D images via structure from motion, we use the available point visibility in the 2D images to define the significance of each point. For point clouds acquired from 3D scanners or Kinect devices, we plan to adopt machine learning to relate the performance of 3D visual intelligence algorithms to point-cloud data characteristics.
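For the structure-from-motion case, one simple reading of visibility-based significance is sketched below: each point’s score is its normalized count of observing images. The normalization is our illustrative choice, not the project’s actual metric.

```python
# Sketch: visibility-based significance for SfM point clouds.
# A point seen by more source images is treated as more significant;
# the max-normalization is an illustrative choice.
import numpy as np

def visibility_significance(visibility: list[set[int]]) -> np.ndarray:
    """visibility[i] = set of image ids in which point i is observed."""
    counts = np.array([len(v) for v in visibility], dtype=float)
    return counts / counts.max()          # significance in (0, 1]

# Points observed by many cameras (stable, well-constrained geometry)
# score near 1; points seen in a single view score lowest.
sig = visibility_significance([{0, 1, 2, 3}, {1, 2}, {4}])
print(sig)                                # [1.0, 0.5, 0.25]
```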

In addition, we propose a significance-based point-cloud compression methodology that optimizes bit allocation. Significance-based adaptive quantization will be studied to minimize the utility-rate cost: 3D regions of higher significance are quantized with smaller quantization steps, while regions of lower significance can be quantized with larger steps, achieving the optimal utility-rate trade-off. The significance-based method can deliver the best performance of 3D visual intelligence algorithms within a given point-cloud data budget, which not only saves storage and bandwidth but also significantly reduces the computational requirements of applications.
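The sketch below illustrates this adaptive-quantization idea: the step size shrinks as significance grows, so high-significance regions keep more geometric precision. The linear step-size schedule between a fine and a coarse step is an assumption made for illustration.

```python
# Sketch: significance-adaptive quantization of point coordinates.
# High-significance points get a finer step; the linear schedule between
# q_fine and q_coarse is an illustrative assumption.
import numpy as np

def adaptive_quantize(coords, significance, q_fine=0.01, q_coarse=0.08):
    """Quantize each point with a step interpolated from its significance."""
    steps = q_coarse - (q_coarse - q_fine) * significance[:, None]
    return np.round(coords / steps) * steps, steps

coords = np.array([[1.234, 5.678, 9.012],
                   [1.234, 5.678, 9.012]])
sig = np.array([1.0, 0.0])                # important point vs. unimportant one
quantized, steps = adaptive_quantize(coords, sig)
print(steps[:, 0])                        # [0.01 0.08]
print(quantized)                          # first row nearly unchanged
```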

We will also explore new ways to obtain compact 3D feature descriptors and to predict features so as to reduce redundancy among them. For compact descriptors, we will train optimal transforms that keep the most important elements of each feature descriptor vector and remove the redundancy within it. For feature prediction, we can compute gradients from the encoded 3D coordinates in the BL to predict the local feature descriptors in the ELs.
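As a stand-in for such a trained transform, the sketch below uses PCA (via SVD) to keep the top-k components of each descriptor; the project’s actual learned transform may well differ.

```python
# Sketch: compacting feature descriptors with a learned linear transform.
# PCA (via SVD) stands in for the trained optimal transform; k is the
# retained dimensionality, an illustrative choice.
import numpy as np

def fit_transform_basis(descriptors: np.ndarray, k: int) -> np.ndarray:
    """Learn a (d, k) basis keeping the k most energetic directions."""
    centered = descriptors - descriptors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T                       # top-k principal directions

def compress(descriptors, basis):
    return descriptors @ basis            # (N, d) -> (N, k)

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 128))      # e.g. 128-D local descriptors
basis = fit_transform_basis(feats, k=32)
codes = compress(feats, basis)
print(codes.shape)                        # (1000, 32)
```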

The project team has an excellent track record in visual signal and feature compression, computer vision, visual-saliency modelling, image-based 3D reconstruction, and geometry processing, and our substantial previous work leading to this proposal has demonstrated the feasibility and benefits of feature compression, significance assessment and utility-based compression optimization. The project is expected to deliver a revolutionary technical framework for 3D point-cloud data compression and management, enabling resource-efficient 3D visual intelligence in the big visual data era across a wide range of applications and services.

Assistant Professor Pan Jialin Sinno

Project: “A Geo-Distributed Multi-task Learning Platform for Big Data Analytics”

In the era of big data, developing distributed machine learning algorithms for big data analytics has become increasingly important yet challenging. Most recent work on distributed machine learning optimization focuses on learning a single predictive model in a setting where the data of one task is spread over different worker machines. There is, however, another natural setting in which the data of different sources (e.g., users or organizations) is stored on geo-distributed local machines, and the goal is to learn a separate predictive model for each source.

In this setting, the traditional approach is to treat learning on each source’s data as an independent task and solve it locally. This fails to exploit all the available data to learn a more accurate predictive model for each source. An alternative is multi-task learning (MTL), in which multiple tasks are learnt jointly so that each benefits from its related tasks; the aim is to exploit the shared information between tasks to achieve better generalization than learning them independently. Owing to data privacy and security concerns, as well as the communication cost of transmitting the data, it is not feasible to centralize the data of the different tasks to perform multi-task learning; and even if it were, the total data could easily exceed a single machine’s physical memory. Most existing multi-task learning methods, however, cannot be implemented directly in a distributed manner. Although data-parallel distributed algorithms for single-task learning have been developed, they do not fit the multi-task setting described above, because multi-task learning requires joint optimization of the parameters of different tasks.

To address this problem, we propose a distributed multi-task learning algorithmic framework, denoted DMTL, which allows multi-task learning to be carried out in a distributed manner when the tasks are geo-distributed and their data is stored locally on different machines. In our framework, a communication-efficient primal-dual distributed optimization technique is used to learn the multiple tasks, together with the task relatedness, in the parameter server paradigm, with a theoretical convergence guarantee. We plan to conduct extensive experiments on both synthetic and real-world datasets to evaluate the framework in terms of scalability, effectiveness, and convergence.
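A much-simplified, single-process sketch of the idea is given below: each “worker” holds one task’s data and updates its own weights, while a mean-regularization term couples the tasks through a server-side average. The mean-regularized objective and plain gradient descent are stand-ins for the proposal’s actual primal-dual method.

```python
# Sketch: mean-regularized multi-task least squares, simulated as a
# parameter-server loop. Each task keeps its data "local"; only weight
# vectors travel. Gradient descent stands in for the actual
# communication-efficient primal-dual technique of the proposal.
import numpy as np

def dmtl_sketch(tasks, lam=0.5, lr=0.1, rounds=200):
    """tasks: list of (X_t, y_t) pairs, one held by each worker."""
    d = tasks[0][0].shape[1]
    W = [np.zeros(d) for _ in tasks]
    for _ in range(rounds):
        w_bar = np.mean(W, axis=0)              # server aggregates
        for t, (X, y) in enumerate(tasks):      # workers update locally
            grad = X.T @ (X @ W[t] - y) / len(y) + lam * (W[t] - w_bar)
            W[t] -= lr * grad
    return W

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
tasks = []
for _ in range(3):                              # 3 related tasks
    X = rng.normal(size=(40, 5))
    y = X @ (w_true + 0.1 * rng.normal(size=5)) # per-task perturbation
    tasks.append((X, y))
W = dmtl_sketch(tasks)
print(np.round(W[0] - w_true, 2))               # small residual
```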

Associate Professor Cai Jianfei

Project: “New Local Illumination Model, Fine-grained Shape Recovery and Beyond”

The reconstruction of three-dimensional real-world objects from images is an important topic in computer vision and computer graphics. Many techniques have been developed, including multi-view stereo (MVS), shape-from-shading (SfS), photometric stereo (PS), and combinations thereof. State-of-the-art high-quality 3D reconstruction methods often use MVS or a depth sensor to obtain a coarse shape, and then employ SfS or PS to refine surface details using shading cues, which rest on the relations among 3D geometry, surface reflectance properties and lighting conditions. Along this line, the recent focus has been on shape refinement under unknown natural illumination, which is of significant practical interest.

Under unknown natural illumination, almost all existing shading-based shape refinement works model the lighting with simple low-frequency global illumination models, which can recover surface details only to a certain extent, not at a fine-grained level. Moreover, most existing shading-based works under unknown natural illumination assume a simple surface reflectance model, such as a Lambertian surface with constant albedo, which limits their practical scope. The fundamental question we want to address in this proposal is therefore: given a coarse 3D shape, under unknown natural illumination and without any assumption on lighting or surface reflectance, can we refine an object’s shape from images in a fine-grained and fast manner?

At the heart of our proposed solution is a local illumination model, which assigns each vertex its own overall illumination vector. Although introducing such a dense local illumination model might seem to make the SfS problem intractable, our preliminary work has shown that, with properly designed regularizers or constraints, it can benefit shape refinement significantly. The local illumination model allows high-frequency lighting modelling that better captures general natural illumination and its interactions with geometry and reflectance, thus facilitating fine-grained shape refinement.
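One way to read this formulation is sketched below: each vertex carries its own illumination vector, the rendered intensity is a dot product with the vertex normal, and a smoothness term over mesh edges regularizes the otherwise under-constrained problem. The Lambertian-style shading and the quadratic regularizer are our illustrative assumptions, not the project’s actual energy.

```python
# Sketch: an energy with a per-vertex illumination vector l_v. Shading is
# a dot product with the vertex normal n_v (a Lambertian-style stand-in),
# and neighbouring vertices are encouraged to share similar illumination.
# Both terms are illustrative assumptions.
import numpy as np

def local_illumination_energy(L, normals, observed, edges, mu=1.0):
    """
    L:        (V, 3) per-vertex illumination vectors (the unknowns)
    normals:  (V, 3) vertex normals from the coarse shape
    observed: (V,)   observed image intensity per vertex
    edges:    list of (i, j) mesh edges
    """
    shaded = np.einsum("vd,vd->v", L, normals)      # I_v = l_v . n_v
    data_term = np.sum((shaded - observed) ** 2)
    smooth_term = sum(np.sum((L[i] - L[j]) ** 2) for i, j in edges)
    return data_term + mu * smooth_term             # minimize over L
```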

A local illumination model that succeeds at shape refinement would be a breakthrough in this classical topic, since under unknown natural illumination, and without any light probe tool, no one has ever modelled the lighting at such a fine-grained level. If such fine-grained illumination can be recovered accurately and quickly, it will enable many applications, including shape refinement, depth recovery, relief reconstruction, relighting, and more.

