Introduction to Professor Jonghye Woo and His Lab at Massachusetts General Hospital/Harvard Medical School
  1. Could you briefly introduce yourself (and your University/Lab)?

I am currently an Assistant Professor in the Department of Radiology at Harvard Medical School and Massachusetts General Hospital (MGH) in Boston. I received the B.S. degree from Seoul National University, Seoul, Korea, in 2005, and the M.S. and Ph.D. degrees from the University of Southern California (USC), Los Angeles, in 2007 and 2009, respectively, all in electrical engineering. I worked as a research associate at Cedars-Sinai Medical Center in Los Angeles in 2010. Then, I was a Postdoctoral Fellow and a Research Associate at the University of Maryland and a visiting scientist at Johns Hopkins University in Baltimore from 2010 to 2014. I received the USC Viterbi School of Engineering Best Dissertation Award in 2010 and the NIH K99/R00 Pathway to Independence Award in 2013.

The overarching goal of my lab is to bridge engineering and medical science through advanced imaging, analysis, and machine learning. My lab is part of the Gordon Center for Medical Imaging at MGH and is currently focused on a variety of medical image analysis and machine learning techniques for applications involving the tongue, the heart, and the brain. We collaborate with experts from diverse fields, including speech scientists, neuroscientists, medical physicists, and software engineers.

  2. What have been your most significant research contributions up to now?

Our team developed the first version of a 3D vocal tract atlas from structural MRI and a 4D multimodal statistical atlas of the tongue during speech from cine- and tagged-MRI. These atlases, the first of their kind, provide common spatial and spatiotemporal coordinate systems within which multiple subjects can be registered so that their features can be compared directly. In addition, our team developed a variety of machine/deep learning techniques to identify hidden and previously uncharted structures (functional units, or muscle synergies) of tongue motion during speech from cine- and tagged-MRI.

  3. What problems in your research field deserve more attention (or what problems would you like to solve) in the next few years, and why?

In the next few years, we will focus on a variety of deep learning challenges. For example, we will focus on multimodal deep learning that jointly fuses information to learn a shared representation from disparate modalities, including imaging data and information from other sources. This is challenging because of the disparate statistical properties of, and highly non-linear relationships between, the "low-level" features inherent in the multiple sources of information. Criticisms of many deep learning methods stem partly from the difficulty of interpreting intermediate results and understanding the relationships that the models learn, which makes them behave like a "black box." We will also try to demystify this black-box nature of deep learning using various techniques.
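The idea of learning a shared representation from disparate modalities can be sketched very simply: each modality gets its own encoder that projects into a common latent space, and the embeddings are then fused. The sketch below is a minimal, hypothetical illustration (the dimensions, linear encoders, and averaging fusion are all assumptions for clarity, not the lab's actual method; real systems would use deep, trained networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions: an imaging modality, a second
# (non-imaging) modality, and the shared latent space.
d_img, d_other, d_shared = 128, 16, 32

# Modality-specific linear encoders (stand-ins for deep networks).
W_img = rng.normal(scale=0.01, size=(d_img, d_shared))
W_other = rng.normal(scale=0.01, size=(d_other, d_shared))

def encode(x, W):
    """Project one modality into the shared latent space (tanh non-linearity)."""
    return np.tanh(x @ W)

def fuse(x_img, x_other):
    """Joint representation: average the two modality embeddings."""
    return 0.5 * (encode(x_img, W_img) + encode(x_other, W_other))

z = fuse(rng.normal(size=d_img), rng.normal(size=d_other))
print(z.shape)  # (32,)
```

In practice the encoders would be trained jointly (e.g., with a shared downstream loss), which is precisely where the difficulty of aligning modalities with very different statistical properties arises.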

  4. What advice would you like to give to the young generation of researchers/engineers?

It is always important to set a long-term career goal that states where you want to be 5-10 years from now (e.g., academia vs. industry). When solving new research problems, it is important to take a formal yet innovative approach to understanding the problems and to come up with various solutions to tackle them. In particular, it is important to develop robust hypotheses and anticipate the likely outcomes of experiments before carrying them out, to increase the likelihood of success.