MOE AcRF Tier 1 Grant Awards

Published on: 22-Aug-2017

Congratulations to the following professors on receiving MOE AcRF Tier 1 Grants.

PI: Associate Professor Lin Feng

Co-PIs: Associate Professor (adjunct) Leong Khai Pang, Tan Tock Seng Hospital (LKC School of Medicine) and Associate Professor Qian Kemao, SCSE

Project: “Key Techniques for the Statistic Shape Modeling in Anatomical Structure Reconstruction, Segmentation and Registration”

This project is motivated by the ready adoption of computerized image segmentation and registration in a wide range of clinical trials and routine practice. For instance, rheumatoid arthritis (RA), a systemic disease manifesting primarily as joint inflammation, affects about 1% of the population. One of the most important outcomes in practice is irreversible joint damage, imaged with X-ray, ultrasound, CT and MRI; the joints of the hands are assessed with the Sharp-van der Heijde method. Accurate reconstruction of 3D models is the basic requirement for automated measurement and assessment, and the same applies in other clinical departments: all computer-aided diagnoses and assessments rest on accurate reconstruction, segmentation and registration of 2D and 3D images across different modalities. As acquired images often contain noise and missing or overlapping/occluded data, our techniques for statistical shape models (SSMs), especially those for incomplete or corrupted datasets, will be of great value to both imaging equipment companies and clinical trial agencies.

To address these challenges, the project aims to develop innovative SSM technologies that are not only robust to abnormal training data in clinical applications but also able to handle the nonlinear distribution of the shape population. We propose an original statistical modeling solution. Briefly: (i) to recover a low-rank linear subspace while coping with nonlinear distributions, a kernelized robust principal component analysis (KRPCA) algorithm will be studied in depth and fully implemented; (ii) the nonlinear model will then be evaluated against verified tissue segmentations in which component shapes are separately distributed, assuring practical usefulness in clinical applications; (iii) with the clinical collaborator's assessments on large image datasets, we aim for significantly higher quality in medical segmentation and registration than competing methods.
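As a concrete illustration of the building block behind step (i), the Python sketch below implements standard (linear) robust PCA via principal component pursuit, splitting a data matrix into a low-rank part and a sparse outlier part. The kernelized variant (KRPCA) proposed above is not shown here, and the parameters and toy data are illustrative assumptions.

import numpy as np

def rpca_pcp(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Robust PCA via principal component pursuit (inexact ALM):
    split M into a low-rank part L and a sparse outlier part S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update by singular value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Sparse (outlier) update by elementwise soft thresholding.
        S = shrink(M - L + Y / mu, lam / mu)
        # Dual update on the constraint M = L + S.
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Toy demo: rank-5 "shape" data corrupted by sparse gross errors.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
M[rng.random(M.shape) < 0.05] += 10.0
L, S = rpca_pcp(M)
print("recovered rank:", np.linalg.matrix_rank(L))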



 

PI: Dr Owen Noel Newton Fernando

Co-PIs: Assistant Professor Erik Cambria and Associate Professor Ng Wee Keong

Project: “Twittener: Twitter speech synthesis with natural language processing”

Twitter is one of the most popular online social networking services in the world, with more than 500 million registered users and approximately 320 million monthly active users. It allows users to get rapid, concise information and to follow the latest online trends and topics. Despite its popularity, Twitter is mainly a text-based service, which makes it less accessible while driving or exercising, activities that require the user's visual attention. Moreover, the elderly and people with disabilities (but also the less literate) may find it difficult to use text-based social networking services like Twitter. Hence, the proposed approach, named "Twittener", is designed to tackle these challenges through sentiment analysis and text-to-speech technologies that convert tweets from text to audio. This involves translating linguistic databases into new languages and clustering sentences in high-dimensional space using deep neural networks.

In this project, we aspire to speed up the extremely time-consuming sentiment detection process involving multiple types of data, such as audio, video and text. To achieve this, we introduce syntactic and semantic multilingual features that can learn hierarchies of words and capture sentence structure using machine learning algorithms such as LDA, recurrent neural networks, deep convolutional networks and transfer learning. The main contribution of this research is to cultivate a new trend of integrating text-to-speech into social media platforms to enhance their accessibility and usability. Furthermore, the project proposes a novel mechanism that combines sentiment analysis, trend detection, voice input and text-to-speech techniques on social media platforms to provide a more personalized user experience.
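As a minimal illustration of the tweets-to-audio pipeline, the sketch below pairs a toy lexicon-based sentiment detector with the off-the-shelf pyttsx3 text-to-speech engine. Both the lexicon and the choice of engine are assumptions for illustration only, not the project's actual components.

import pyttsx3  # off-the-shelf offline TTS engine; an illustrative choice

# Toy sentiment lexicon standing in for a real sentiment-analysis model.
POSITIVE = {"great", "love", "happy", "amazing"}
NEGATIVE = {"bad", "hate", "sad", "terrible"}

def sentiment(tweet: str) -> str:
    words = set(tweet.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def speak_tweet(tweet: str) -> None:
    engine = pyttsx3.init()
    mood = sentiment(tweet)
    # A full system could adapt voice, pitch, or speaking rate to the
    # detected sentiment; here only the rate changes.
    engine.setProperty("rate", 180 if mood == "positive" else 140)
    engine.say(f"{mood} tweet: {tweet}")
    engine.runAndWait()

speak_tweet("I love this amazing new feature")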



PI: Associate Professor Lee Bu Sung

Co-PI: Associate Professor Deepu Rajan

Collaborator: Associate Professor Andrzej Stefan Sluzek (Khalifa University, Abu Dhabi)

Project: “Eyegaze estimation using Deep Appearance in Natural Environment”

In daily life, more than eighty percent of the information we receive comes through the eyes. Eyes and their movements play an important role in expressing a person's desires, cognitive processes, emotional states, and interpersonal relations. Eye tracking has been applied in various areas, such as human-computer interaction and marketing, and goes beyond that to aid people with disabilities. However, the core technology of eye-tracking applications, gaze estimation, i.e., determining what a person is looking at, still performs poorly in natural environments: a dedicated commercial eye-tracking device (eye tracker) is indispensable, and head movement must be restricted to a small range. These limitations prevent eye tracking from becoming a pervasive technology. As webcams become standard components of computers, and especially of mobile devices, replacing the eye tracker with a monocular camera would make eye-tracking applications far more widespread. The main goal of this project is to develop a highly accurate gaze estimation system with a simple and flexible setup that can be integrated with existing interaction applications. There are three component tasks:

- Data collection: Currently there is no standard data-collection tool or dataset for gaze estimation, so we aim to develop a data-collection application to gather large numbers of training samples in natural environments, e.g. indoors and outdoors.

- Model generation: We will build a training model that takes pairs of gaze points and face images as inputs. The model will automatically learn the representative features of the face image that determine the gaze point, and reveal the hidden relations between the gaze space and the face-image space (a minimal sketch follows this list).

- System deployment: Since the target machines, e.g. mobile devices, have limited computing ability, we aim to deploy a lightweight client with the trained model.
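Under many simplifying assumptions, the sketch below shows the kind of appearance-based model the second task describes: a small convolutional network trained to regress 2-D gaze points directly from face images. Random tensors stand in for the collected dataset, and the architecture is a placeholder, not the project's design.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for the dataset: 64x64 grayscale face crops paired
# with normalized (x, y) on-screen gaze coordinates.
faces = torch.rand(8, 1, 64, 64)
gaze = torch.rand(8, 2)

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 32), nn.ReLU(),
    nn.Linear(32, 2),                                      # predicted gaze point
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(5):
    loss = loss_fn(model(faces), gaze)
    opt.zero_grad(); loss.backward(); opt.step()
print("final training loss:", loss.item())

A network this small could also run client-side, which is the point of the third task; a deployable system would of course need far more data and a deeper model.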



 

Associate Professor He Ying

Project: “Poisson Vector Graphics and Closed-form Poisson Solver: Theory and Algorithms”

Vector graphics provides several practical benefits over traditional raster graphics, including sparse representation, compact storage, geometric editability, and resolution independence. Early vector graphics supported only linear or radial color gradients, limiting its use for photo-realistic images. Orzan et al. pioneered diffusion curve images, which are curves with colors defined on either side: by diffusing the colors along the control curves, the final image exhibits sharp boundaries across the curves and smoothly shaded regions between them. Thanks to their compact nature, diffusion curves quickly gained popularity in the computer graphics field. However, diffusion curves neither support control of the color gradient nor allow local changes of shading and tone, thereby limiting their usefulness in real-world applications.

To overcome the limitations of diffusion curves, we present a new type of vector graphics, called Poisson vector graphics (PVG). In contrast to diffusion curves, which are harmonic functions, a PVG is the solution of Poisson's equation, whose solution space is much larger than that of Laplace's equation, thereby giving users more control over the image (see the equations after the list below). The overarching goal of this project is to expand the horizon of Poisson vector graphics on both the theoretical and practical fronts:

(1) At the theoretical level, we develop a novel closed-form solver for 2D and 3D Poisson equations. In contrast to existing finite element methods, which compute numerical solutions only, our method naturally supports random-access evaluation, zooming to arbitrary resolution, and anti-aliasing.

(2) At the application level, we propose a computational framework for generating and rendering PVGs. Armed with two new types of primitives, namely Poisson curves and Poisson regions, PVGs can easily produce photo-realistic effects such as specular highlights, core shadows, translucency and halos. We also develop a fully automatic algorithm for converting a natural image into an editable PVG with sparse geometric features.
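To make the contrast between the two formulations concrete, the equations below use standard notation (an illustration, not taken verbatim from the project): a diffusion curve image is harmonic away from its control curves \Gamma, whereas a PVG carries a user-controlled source term f.

\[
\Delta u = 0 \ \text{in } \Omega \setminus \Gamma, \qquad u|_{\Gamma} = \text{prescribed curve colors} \qquad \text{(diffusion curves)}
\]
\[
\Delta u = f \ \text{in } \Omega, \qquad u|_{\Gamma} = \text{prescribed curve colors} \qquad \text{(Poisson vector graphics)}
\]

Since f \equiv 0 is one admissible choice of source term, every diffusion curve image is also a PVG, which is why the Poisson solution space is strictly larger.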


Fig. 1. A Poisson vector graphic (PVG) consists of the popular diffusion curves (DCs) and two new types of primitives, called Poisson curves (PCs) and Poisson regions (PRs). By manipulating the Laplacian constraints associated with PCs and PRs, users can easily control global and local shading profiles and produce photorealistic effects, such as specular highlights and core shadows, that are difficult to obtain using diffusion curves alone.



 

Assistant Professor Erik Cambria

Project: “Big Social Data Analysis”

Big social data analysis is about processing large volumes of online social data coming from different sources, in various formats, and at different paces. It represents a holistic approach to the study of interaction between web users. Early works focused on measuring either the intensity of such interaction (e.g., social network analysis) or its content (e.g., sentiment analysis). This project aims to concomitantly collect, aggregate, and process both the content and the intensity of online social interaction. In particular, we use sentic computing for the former and community embeddings for the latter.

Most existing graph embedding methods focus on nodes: they output a vector representation for each node in the graph such that two nodes that are "close" on the graph are also close in the low-dimensional space. Despite the success of embedding individual nodes for graph analytics, an important concept has been missing: embedding communities, i.e., groups of nodes. Embedding communities is useful not only for supporting various community-level applications but also for helping preserve community structure in graph embeddings. In fact, we see community embeddings as providing a higher-order proximity with which to define node closeness, whereas most popular graph embedding methods focus on first-order and/or second-order proximities. To learn community embeddings, we hinge upon the insight that community embeddings and node embeddings reinforce each other.
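One simple way to realize the node-then-community pipeline sketched above is to learn DeepWalk-style node embeddings from random walks and then fit a Gaussian mixture over them, so that each community is embedded as a mean plus a covariance. This is an illustrative stand-in under assumed parameters, not necessarily the project's algorithm.

import random
import numpy as np
from gensim.models import Word2Vec
from sklearn.mixture import GaussianMixture

random.seed(0)

# Toy graph: two dense triangles joined by a single bridge edge (2-3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

# DeepWalk-style node embeddings: random walks fed to skip-gram.
walks = []
for _ in range(200):
    node = random.choice(list(adj))
    walk = [str(node)]
    for _ in range(8):
        node = random.choice(adj[node])
        walk.append(str(node))
    walks.append(walk)
w2v = Word2Vec(walks, vector_size=16, window=3, min_count=1, sg=1, epochs=10, seed=0)
X = np.array([w2v.wv[str(n)] for n in sorted(adj)])

# Community embeddings: each community is a Gaussian (mean + covariance)
# fitted over the embeddings of its member nodes.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("node-to-community assignment:", gmm.predict(X))
print("community embeddings (means):\n", gmm.means_)

In the full reinforcement loop described above, the fitted communities would in turn reshape the node embeddings, and the two steps would alternate rather than run once.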



PI: Dr Smitha Kavallur Pisharath Gopi

Co-PI: Assoc Prof Vinod Achutavarrier Prasad

Collaborator: Dr Sharad Sinha

Project: “Automatic Design Synthesis and Optimization of Machine Learning Algorithms on Resource Constrained Devices”                

This project aims to develop a semi-automatic design synthesis and optimization flow targeting machine learning applications on resource-constrained devices. Wearable biomedical devices are a prime example of such devices, yet they provide immense flexibility in delivering healthcare services. Hence, this project aims to move machine learning implementations from enterprise and desktop software to hardware on next-generation wearable devices. This would allow delivery of personalized healthcare by exploiting machine learning algorithms closer to the care receiver, and would facilitate real-time delivery by reducing reliance on data networks for transmitting and processing data.

The semi-automatic design flow will rely on high-level synthesis methodology combined with domain-specific knowledge of machine learning algorithms and digital hardware design. Randomized search based on evolutionary algorithms will explore the design space jointly created by the design alternatives in the two domains, seeking architectures that satisfy constraints on computational resources, power, and performance while meeting the numerical accuracy and decision-making requirements of the machine learning algorithms. The architectures resulting from this design-space exploration will be turned into hardware descriptions using high-level synthesis.

This project is significant both academically and for Singapore. It is academically significant because very little prior work exists on implementing machine learning algorithms on resource-constrained devices; most implementations have focused on the accuracy of machine learning algorithms under the assumption of an unlimited supply of computing power. It is relevant to Singapore because it would enable smart healthcare services and delivery for its population, and would motivate care receivers to be proactive, since data privacy issues, generally related to data networks and servers, would be minimized.
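The sketch below conveys the flavour of the proposed evolutionary design-space exploration. The design knobs, budgets, and the analytical cost model (standing in for real high-level-synthesis reports) are entirely made-up assumptions for illustration.

import random

random.seed(0)

# Hypothetical design space for one machine learning kernel in hardware.
SPACE = {
    "bit_width":   [4, 8, 16, 32],     # fixed-point precision
    "parallelism": [1, 2, 4, 8, 16],   # parallel MAC units
    "unroll":      [1, 2, 4, 8],       # loop unroll factor
}
AREA_BUDGET, LATENCY_BUDGET = 400.0, 6.0

def score(d):
    """Made-up model: area grows with precision and parallelism,
    latency shrinks with parallelism, accuracy drops at low precision."""
    area = 0.9 * d["bit_width"] * d["parallelism"] * d["unroll"]
    latency = 64.0 / (d["parallelism"] * d["unroll"])
    accuracy = min(1.0, 0.70 + 0.01 * d["bit_width"])
    if area > AREA_BUDGET or latency > LATENCY_BUDGET:
        return -1.0                    # infeasible under the constraints
    return accuracy

def mutate(d):
    k = random.choice(list(SPACE))
    return {**d, k: random.choice(SPACE[k])}

# Evolutionary loop: keep the fittest half, refill by mutation.
pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(12)]
for _ in range(30):
    pop.sort(key=score, reverse=True)
    pop = pop[:6] + [mutate(random.choice(pop[:6])) for _ in range(6)]

best = max(pop, key=score)
print("best design:", best, "score:", score(best))

In the actual flow, the scoring function would be replaced by resource, power, timing, and accuracy figures fed back from high-level synthesis, and the surviving architectures would be emitted as hardware descriptions.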



 

Assistant Professor Lin Shang-Wei

Project: “Automatic Program Repair and Synthesis based on Causality Semantics”

In a software development life cycle (SDLC), testing/verification and debugging usually consume significant time and effort. If a bug is found, the designer or engineer has to analyse it manually to find its root cause. Even after the root cause is identified, fixing the bug is usually non-trivial because it sometimes requires human creativity and expertise. To automate the program repair process, we aim to develop a repair framework based on causality semantics that is capable not only of repairing complicated buggy programs (i.e., modifying multiple statements at the same time; a toy example follows the list below), but also of proving that the repaired code conforms to the specification, i.e., the designer's expectation, with respect to the causality semantics. Specifically, the aims of the project are as follows:

1) Causality Semantics: we aim to construct causality semantics between different types of statements so that we can investigate the actual cause of a bug when it is caused by multiple statements in a buggy program.

2) Automatic Program Repair: based on the constructed causality semantics, we aim to develop techniques that automatically repair a buggy program. The repair techniques will not only identify the actual cause of the bug but also modify or synthesize correct statements to fix it. We also aim to handle inter-procedural programs as well as concurrent programs.

3) Correctness Proof: given a causality semantics, we aim to prove that the repaired program conforms to the specification with respect to that semantics.

4) Behaviour Model of the Program: we also aim to construct a behaviour model of the repaired program, which can help the designer understand the program better. Sometimes the designer is not aware that the specification does not match his/her intention, or that the implementation does not conform to the specification; a behaviour model can help to eliminate such inconsistencies.
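As a toy illustration of a multi-statement repair (a hypothetical hand-written example, not output of the proposed framework), consider a routine whose specification is "return the mean of the strictly positive inputs, or 0.0 if there are none". The correct fix must change two statements at once; repairing either statement alone still violates the specification.

# Buggy implementation: two interacting faults.
def avg_pos(xs):
    total, count = 0, 0
    for x in xs:
        if x >= 0:                # fault 1: zeros are wrongly counted
            total += x
            count += 1
    return total / count          # fault 2: crashes when count == 0

# Repaired implementation: both statements are modified together.
def avg_pos_repaired(xs):
    total, count = 0, 0
    for x in xs:
        if x > 0:                 # repair 1: keep strictly positive values only
            total += x
            count += 1
    return total / count if count else 0.0   # repair 2: guard the empty case

print(avg_pos_repaired([0, 2, 4]))   # 3.0
print(avg_pos_repaired([-1, 0]))     # 0.0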



 

Associate Professor Kong Wai-Kin Adams

Project: “Generating Tattoo Images from Generic Images”

Tattoos, an effective means of criminal identification, have been used regularly by law enforcement agencies around the world to search for suspects. Currently, tattoo images collected from suspects and prisoners are processed manually, a costly and error-prone approach: the collected tattoo images are first located and cropped, and then labelled manually by law enforcement officers. The labels are used to retrieve suspects with tattoos similar to those described by witnesses. Researchers have developed tattoo matching, detection, and localization methods, but none has attempted to develop methods for tattoo annotation. Tattoo annotation is arguably the most critical step in the process, because once the labels are wrong, suspects cannot be retrieved. However, intra- and inter-observer errors are unavoidable. To address this problem, the National Institute of Standards and Technology (NIST) established a standard for manual annotation. The standard has been widely used in the U.S., but law enforcement agencies in many countries have not adopted any standard.

Though the NIST standard can reduce intra- and inter-observer errors, it does not solve two major problems in tattoo annotation: 1) it is still based on manual annotation, and 2) it has only a limited set of predefined labels, e.g., cross and flag. Given enough training images for a particular tattoo label, e.g., car tattoos, it is possible to use deep learning to perform automatic annotation. However, some tattoo images are difficult to collect, e.g., Singapore flag tattoos, because they are not popular and only a limited number of people have them. In this project, we aim to translate generic images (e.g., of the Singapore flag) into tattoo images (e.g., Singapore flag tattoos) while retaining their class labels. This is the first step towards automating the tattoo annotation process.
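A minimal sketch of one possible formulation of this image-to-tattoo transfer follows: an adversarial generator learns to make generic images look like tattoos, while a frozen, pretrained classifier penalizes any drift of the class label. The tiny networks, losses, and random data below are placeholder assumptions, not the project's actual design.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 32x32 RGB batches standing in for the real datasets.
generic = torch.rand(4, 3, 32, 32)    # generic images, e.g. flag photos
tattoos = torch.rand(4, 3, 32, 32)    # real tattoo images
labels = torch.randint(0, 10, (4,))   # class labels of the generic images

# Placeholder networks; real ones would be deep CNNs.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())       # generic -> tattoo
D = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 16 * 16, 1))           # real vs. generated
C = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 16 * 16, 10))          # "pretrained" classifier
for p in C.parameters():              # freeze the label classifier
    p.requires_grad_(False)

bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(3):
    fake = G(generic)
    # Discriminator: tell real tattoos from generated ones.
    d_loss = bce(D(tattoos), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool D while keeping the original class label recognizable.
    g_loss = bce(D(fake), torch.ones(4, 1)) + ce(C(fake), labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print("d_loss:", d_loss.item(), "g_loss:", g_loss.item())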



 

Associate Professor Cong Gao

Project: “Novel Embedding models for Representing Spatial Items”

Recent years have witnessed the rapid growth of location-based services, such as Foursquare, Twitter and Google Places. We mainly consider three types of spatial items: 1) locations, i.e., particular places with geographical coordinates; 2) trajectories, i.e., sequences of discrete locations that describe the underlying routes of moving objects over time; and 3) regions, i.e., geographical areas within a certain range that contain a variety of locations. Based on such spatial data, many research problems have been studied, such as POI recommendation, trajectory search, and region suggestion. The main challenge of these problems lies in the difficulty of effectively learning the relationships among spatial items. To this end, we resort to embedding models, which have attracted extensive research attention in the artificial intelligence and data science communities. We aim to develop novel embedding models that represent spatial items in a latent low-dimensional space, such that the characteristics of the items and the relations among them are captured by their representations. Deep representations of spatial items will have many applications in the Smart Nation initiative.
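For the location case, a natural baseline (an illustrative assumption, not the project's model) treats each user's check-in sequence as a "sentence" and learns skip-gram embeddings over it, so that frequently co-visited places land close together in the latent space. The POI names below are hypothetical stand-ins for real location identifiers.

from gensim.models import Word2Vec

# Toy check-in data: each inner list is one user's ordered POI visits.
checkins = [
    ["marina_bay", "gardens_by_the_bay", "esplanade"],
    ["gardens_by_the_bay", "marina_bay", "merlion_park"],
    ["orchard_road", "ion_orchard", "somerset"],
    ["ion_orchard", "orchard_road", "somerset"],
    ["esplanade", "merlion_park", "marina_bay"],
] * 40   # repeat so the tiny corpus carries enough co-occurrence signal

# Skip-gram over visit sequences: the learned vectors are the
# low-dimensional representations of the spatial items.
model = Word2Vec(checkins, vector_size=16, window=2, min_count=1,
                 sg=1, epochs=20, seed=0)
print(model.wv.most_similar("marina_bay", topn=3))

Trajectories and regions could then be embedded, for example, by composing or aggregating the vectors of their member locations.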
