Johannes Hofmanninger

Contact 


Johannes Hofmanninger
johannes.hofmanninger @ meduniwien.ac.at
Tel: +43 1 40400 73723
Computational Imaging Research Lab
Department of Biomedical Imaging and Image-guided Therapy
Medical University of Vienna
Waehringer Guertel 18-20
A-1090 Vienna / Austria

Office
Anna Spiegel Center of Translational Research
(building 25, floor 7, room 26)

Research Interests

- Content based image retrieval
- Spatial normalization
- Unsupervised and weakly supervised machine learning
- Unsupervised population analysis in medical routine data
- Big data in medicine

Short CV

Johannes Hofmanninger is a PhD student at the Medical University of Vienna. He graduated from the Vienna University of Technology (TU Wien) with a BSc (2010) and an MSc (2014) in computer science, majoring in medical informatics.

Master's thesis: "Learning and Indexing of Texture-Based Image Descriptors in Medical Imaging Data", TU Wien, 2014.

Publications

Johannes Hofmanninger, Florian Prayer, Jeanny Pan, Sebastian Röhrich, Helmut Prosch and Georg Langs. "Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem". Eur Radiol Exp 4, 50 (2020). https://doi.org/10.1186/s41747-020-00173-2

Johannes Hofmanninger, Sebastian Röhrich, Helmut Prosch and Georg Langs. "Separation of target anatomical structure and occlusions in thoracic X-ray images". arXiv preprint, 2020. https://arxiv.org/abs/2002.00751

Johannes Hofmanninger, Bjoern Menze, Marc-André Weber, and Georg Langs. "Mapping Multi-Modal Routine Imaging Data to a Single Reference via Multiple Templates". ML-CDS Workshop at the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2017.

Johannes Hofmanninger, Markus Krenn, Markus Holzer, Thomas Schlegl, Helmut Prosch, and Georg Langs. "Unsupervised Identification of Clinically Relevant Clusters in Routine Imaging Data". International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2016.

Johannes Hofmanninger and Georg Langs. "Mapping Visual Features to Semantic Profiles for Retrieval in Medical Imaging". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015.


Research

Segmentation:


Automated segmentation of anatomical structures is a crucial step in many medical image analysis tasks. We show that a basic approach, U-Net, performs better than or competitively with other approaches on both routine data and published data sets, and outperforms published approaches once trained on a diverse data set covering multiple diseases. Across test data sets, the composition of the training data consistently has a bigger impact on accuracy than the choice of algorithm.
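
To make the approach concrete, here is a minimal PyTorch sketch of a slice-wise U-Net style segmenter. It is a deliberately tiny, illustrative network with placeholder input, not the trained model accompanying the paper; the class count, channel widths, and preprocessing are assumptions.

    # Minimal 2-level U-Net sketch for 2D slice-wise lung segmentation (illustrative only).
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # two 3x3 convolutions with ReLU, the basic U-Net building block
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self, n_classes=3):  # assumed labels: background, left lung, right lung
            super().__init__()
            self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(64, 128)
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = conv_block(128, 64)
            self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = conv_block(64, 32)
            self.head = nn.Conv2d(32, n_classes, 1)

        def forward(self, x):
            e1 = self.enc1(x)                                     # encoder
            e2 = self.enc2(self.pool(e1))
            b = self.bottleneck(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # decoder re-uses encoder features
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)

    # slice-wise inference on a placeholder for a preprocessed, intensity-normalized CT slice
    model = TinyUNet().eval()
    ct_slice = torch.rand(1, 1, 256, 256)
    with torch.no_grad():
        labels = model(ct_slice).argmax(dim=1)  # per-pixel class labels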

Phenotyping:



A key question in learning from clinical routine imaging data is whether we can identify coherent patterns that recur across a population and are, at the same time, linked to clinically relevant patient parameters. Here, we present a feature learning and clustering approach that groups 3D imaging data, without any supervision, based on visual features extracted at corresponding anatomical regions of clinical routine scans.
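
The sketch below illustrates the general recipe only: it assumes region-wise descriptors have already been extracted after spatial normalization, and the random arrays, region count, and k-means step are placeholders for the actual feature learning and clustering pipeline.

    # Group patients by descriptors pooled over corresponding anatomical regions (illustrative).
    import numpy as np
    from sklearn.cluster import KMeans

    n_patients, n_regions, feat_dim = 500, 50, 64
    rng = np.random.default_rng(0)

    # placeholder for region-wise visual descriptors after mapping each scan to a common atlas
    region_features = rng.random((n_patients, n_regions, feat_dim))

    # concatenate region descriptors into one profile vector per patient
    profiles = region_features.reshape(n_patients, -1)

    # unsupervised grouping; cluster membership can afterwards be tested for
    # association with clinical parameters recorded in the routine data
    clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(profiles)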

Weakly-supervised learning:


To learn models that capture the relationship between semantic clinical information and image elements at scale, we have to rely on data generated during clinical routine (images and radiology reports), since expert annotation is prohibitively costly. Here, we show that re-mapping visual features extracted from medical imaging data, based on weak labels found in the corresponding radiology reports, creates descriptions of local image content that capture clinically relevant information. In medical imaging (a), only a small part of the information captured by visual features relates to relevant clinical information such as diseased tissue types (b). However, this information is typically only available as sets of reported observations at the image level. Here, we demonstrate how to link visual features to semantic labels (c) in order to improve retrieval (d), and how to map these labels back to image regions (e).
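
As a rough sketch of the core step, the snippet below maps image-level visual features to a multi-label vector of report terms and uses the resulting semantic profiles for retrieval; the one-vs-rest logistic regression and the random data are stand-ins, not the features or learning method of the paper.

    # Map visual features to weak, report-level labels and retrieve by semantic profile (illustrative).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    n_images, feat_dim, n_terms = 1000, 128, 20
    rng = np.random.default_rng(0)
    visual_features = rng.random((n_images, feat_dim))                  # pooled image descriptors
    weak_labels = (rng.random((n_images, n_terms)) > 0.9).astype(int)   # terms mined from reports

    # learn a mapping from visual features into the space spanned by report terms
    mapper = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(visual_features, weak_labels)
    semantic_profiles = mapper.predict_proba(visual_features)           # one semantic profile per image

    # retrieval: rank database images by similarity of their semantic profiles to a query image
    query = semantic_profiles[0]
    ranking = np.argsort(-semantic_profiles @ query)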

X-ray:


X-ray radiography is a commonly used, low-cost exam for screening and diagnosis. However, radiographs are 2D projections of 3D structures, which causes considerable clutter that impedes visual inspection and automated image analysis. Here, we propose a fully convolutional network to suppress undesired visual structures in radiographs while retaining relevant image information such as the lung parenchyma.
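
A minimal sketch of the image-to-image setup is given below, assuming paired training targets in which the occluding structures have been removed; the three-layer network and the L1 loss are illustrative placeholders, not the architecture from the paper.

    # Fully convolutional image-to-image network for suppressing occluding structures (illustrative).
    import torch
    import torch.nn as nn

    suppressor = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 1, 3, padding=1),   # output: radiograph with undesired structures suppressed
    )

    xray = torch.rand(4, 1, 256, 256)      # batch of input radiographs (placeholder data)
    target = torch.rand(4, 1, 256, 256)    # paired targets, e.g. parenchyma-only images
    loss = nn.functional.l1_loss(suppressor(xray), target)
    loss.backward()                        # gradients for one training step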