Won Hwa Kim
Won Hwa Kim, Ph.D., is an Assistant Professor of Computer Science and Engineering at the University of Texas at Arlington. His research focuses on developing novel methods in Machine Learning and Computer Vision in the context of their applications to neuroimaging, neuroscience, and healthcare. Specifically, he is interested in multi-resolution analysis of data that live in non-Euclidean spaces. This research thrust is motivated by the observation that a broad variety of statistical inference tasks on signals/functions can be made far more effective using a multi-resolution view of measurements. Much of his work has appeared in top-tier AI conferences such as NIPS, CVPR, ICCV, and ECCV, as well as in neuroimaging journals such as NeuroImage.

Research in NeuroImage Analysis
The most fundamental challenge in developing statistical group-analysis methods for neuroimages is increasing sensitivity and specificity. In many neuroimaging studies, datasets have small sample sizes due to the limited number of participants and the cost of scans. Moreover, most studies have shifted their focus toward `pre-clinical' stages of diseases, where effect sizes are small, and detecting such subtle differences with few samples is often the holy grail in practice.
Fig. 1. Brain surface represented as a graph (i.e., mesh) and cortical thickness values defined over the graph.
Dr. Kim is interested in overcoming such bottlenecks in statistical methods for neuroimages and image-derived measures, especially those that live in non-Euclidean spaces. For example, cortical thickness measurements on brain surfaces are given as a scalar-valued function defined on a manifold (an example is shown in Fig. 1). Also, structural brain networks derived from whole-brain tractography can be represented as weighted graphs and provide a sense of the pathways connecting different brain regions.
To address the analysis needs above, he proposed a multi-resolution shape descriptor using wavelet transform on graphs, which captures the local context of signals directly at each vertex of a graph [NIPS 2012, NeuroImage 2014] as well as a multi-scale descriptor for structural brain connectivity represented as individual-level weighted graphs to perform group analysis [MICCAI 2013, NeuroImage 2015]. These representations yielded surprising improvements in statistical power for the analysis of various biomarkers on brain surface and brain connectivity as shown in Fig. 2.
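The wavelet transform on graphs underlying these descriptors can be sketched in a few lines. The snippet below is an illustrative toy example in the spirit of spectral graph wavelets (Hammond et al.): it builds a small path graph, takes the graph Fourier transform of a signal via the Laplacian eigenbasis, and filters it with a band-pass kernel at several scales. The specific graph, kernel, and scales are assumptions for illustration, not the exact construction from the cited papers.

```python
import numpy as np

# Toy graph: a 6-node path graph with a unit "bump" signal on one vertex
# (standing in for, e.g., cortical thickness on a brain mesh).
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0

D = np.diag(A.sum(axis=1))
L = D - A                      # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)     # eigenvalues = graph frequencies, U = Fourier basis

f = np.zeros(6)
f[2] = 1.0                     # signal defined on the graph vertices

def wavelet_coeffs(f, scales, kernel=lambda x: x * np.exp(-x)):
    """W_f(s, n) = sum_k g(s * lam_k) f_hat(k) u_k(n): filter the signal's
    graph Fourier coefficients with a band-pass kernel g at each scale s."""
    f_hat = U.T @ f                                      # graph Fourier transform
    return np.stack([U @ (kernel(s * lam) * f_hat) for s in scales])

scales = [1.0, 2.0, 4.0]
W = wavelet_coeffs(f, scales)   # shape: (num_scales, num_nodes)
# The column W[:, n] is a multi-scale descriptor of the signal at vertex n,
# capturing its local context at several resolutions.
print(W.shape)
```

Collecting the coefficients across scales at each vertex yields the kind of per-vertex multi-resolution descriptor the text describes; a group analysis would then compare these descriptors across subjects vertex by vertex.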
Fig. 2. Brain regions identified as showing variation due to Alzheimer’s disease (AD) or risk factors. Left: using cortical thickness, Right: using brain connectivity.
Research in Machine Learning and Computer Vision
Fig. 3. An example of graph completion on a cat-shaped graph. Left: partial observation, Right: recovered signal using harmonic analysis on graphs.
The problem of missing or partially observed data is ubiquitous in science --- an issue that is becoming increasingly relevant to the translational/operational aspects of modern computer vision and machine learning as a completion problem. Dr. Kim studies extensions of traditional matrix completion/data imputation to more complex spaces: ideas related to multi-resolution analysis for estimating missing observations defined on graph nodes (i.e., a Graph Completion problem).
To formally define the problem: given a graph and partial signals on its nodes, the objective is to estimate the full signal by exploiting the graph structure and harmonic analysis. He approaches this problem from the perspective of multi-resolution analysis and collaborative filtering, proposing an adaptive sampling scheme and an estimator of the full signal that takes advantage of the bandlimited and sparse nature of signals in the frequency space [ECCV 2016, CVPR 2017].
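The bandlimited assumption can be made concrete with a minimal sketch: if a signal lies in the span of the K lowest-frequency Laplacian eigenvectors, observing enough nodes lets a least-squares fit in that subspace recover the whole signal. The ring graph, bandwidth K, and solver below are illustrative assumptions, not the adaptive sampling algorithm from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: a ring of 20 nodes.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)      # graph Fourier basis, frequencies ascending

# Ground-truth signal: bandlimited to the K lowest graph frequencies.
K = 4
x_hat = np.zeros(n)
x_hat[:K] = rng.normal(size=K)
x = U @ x_hat

# Observe the signal only on a subset of nodes.
obs = rng.choice(n, size=10, replace=False)

# Recovery: find coefficients c minimizing ||U[obs, :K] c - x[obs]||,
# then reconstruct the full signal as x_rec = U[:, :K] c.
c, *_ = np.linalg.lstsq(U[obs, :K], x[obs], rcond=None)
x_rec = U[:, :K] @ c

# When the sampled rows determine the K coefficients, recovery is exact
# up to floating-point error.
print(np.max(np.abs(x_rec - x)))
```

In practice the signal is only approximately bandlimited and noisy, which is where the sparsity-in-frequency view and the choice of which nodes to sample (the adaptive sampling scheme mentioned above) become important.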
Fig. 4. Various results on object category estimation in Imgur images where the state-of-the-art object detection algorithm (i.e., YOLO) did not confidently detect any objects. The categories in parentheses are the image categories estimated using Dr. Kim’s framework.
This framework was applied to a large-scale set of images collected from Imgur and yielded promising results on image category estimation. A graph was constructed based on comments, and a state-of-the-art Deep Learning (DL) object detection algorithm (i.e., YOLO) was used to assign image categories. Unfortunately, the object detector failed on 40% of the images; the framework estimates potential labels for the objects in those images, and some interesting results are shown in Fig. 4.