The computational modeling conversation continues…

By Yansong Zhu, Johns Hopkins University




The final session of Day 1 of the Incubator was Performance Metrics/Task-based Assessment. Andrew Watson, from NASA Ames Research Center, focused on visual performance metrics for imaging systems. He analyzed the shortcomings of traditional models and discussed improvements that could be made using new approaches, using examples of letters, aircraft, and watercraft to illustrate them. Next, Meredith Kupinski, from the University of Arizona, discussed model observers for image quality evaluation. She emphasized that image quality is statistical and can never be defined from a single image. The optimal observer for detection and estimation tasks requires a full characterization of image statistics, which in practice usually demands an unrealistically large number of sample images. She also gave several examples showing the performance of her model observers. With her talk, the first day of the Computational Modeling Incubator came to an end.
 
Day 2
This morning the meeting began with the Emerging Technologies session. Abhinav Jha, from Johns Hopkins University, presented his research on evaluating quantitative imaging methods in the absence of a gold standard. He has developed a tool he calls the No-Gold-Standard (NGS) technique; he first described the assumed relation between true and estimated quantitative values and the other assumptions the tool relies on (a sketch of the kind of linear model typically assumed appears after this paragraph), then compared and ranked three segmentation methods to show how the tool works. He also pointed out current issues with the tool and talked about future work. After that, Aria Pezeshk, from the U.S. Food & Drug Administration, discussed deep learning and how it can be applied to the detection of lesions in medical images. He reviewed a number of traditional machine learning methods and listed their drawbacks, then showed the framework of deep learning methods, focusing on convolutional neural networks (CNNs) and on several techniques used to make training fast and generalizable. He next discussed applications of deep learning, using research being performed at Google and Baidu as examples, and finished by talking about lesion insertion for data augmentation. A lively discussion about deep learning and neural networks followed the presentation.
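As a rough sketch of the relation mentioned above (the symbols here are illustrative placeholders and may not match the exact formulation Jha presented), no-gold-standard evaluation is commonly built on the assumption that each method's estimates are linearly related to the unknown true values:

```latex
% Illustrative linear model commonly assumed in no-gold-standard (NGS) evaluation.
% Symbols are hypothetical, not necessarily those used in the talk:
%   \hat{a}_{pm}  : value estimated for patient p by method m
%   a_p           : unknown true value for patient p
%   u_m, v_m      : slope and bias of method m
%   \epsilon_{pm} : zero-mean noise term of method m
\begin{equation*}
  \hat{a}_{pm} = u_m \, a_p + v_m + \epsilon_{pm}
\end{equation*}
```

Under an assumption like this, the slopes, biases, and noise variances can be estimated without ever knowing the true values, and the competing methods can then be ranked by a figure of merit such as the ratio of noise standard deviation to slope.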
 
At the end of the meeting, Joseph Reynolds gave a summary of the main issues facing communities interested in performance metrics for imaging system design and evaluation. An interesting and useful discussion followed, comparing the needs of the military and medical imaging communities.
 
The OSA Incubator on Computational Modeling & Performance Metrics for Imaging System Design & Evaluation successfully brought together people from different application areas to share their knowledge and new ideas. Thanks for reading, and stay tuned for more blogs about future OSA Incubators, or learn more about the program at www.osa.org/incubator!


Incubator Attendees show their OSA Centennial pride!
 

Posted: 15 April 2016 by Yansong Zhu, Johns Hopkins University

Tags: Biomedical Optics, Imaging, Incubator, OSA Incubator