lung segmentation

Automated segmentation of anatomical structures is a crucial step in many medical image analysis tasks. We show that a basic approach, the U-net, performs better than or competitively with other approaches on both routine data and published data sets, and outperforms published approaches once trained on a diverse data set covering multiple diseases. Across test data sets, training data composition consistently has a bigger impact on accuracy than algorithm choice.
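Comparisons like "better than or competitively with" are typically scored per test set with an overlap metric; a minimal sketch using the Dice coefficient, a standard choice for segmentation (an assumption here, since the text does not name its metric):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as a perfect match
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy 4x4 masks: the predicted lung mask partially overlaps the ground truth
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1    # 4 foreground pixels
truth = np.zeros((4, 4)); truth[1:4, 1:4] = 1  # 9 foreground pixels
score = dice(pred, truth)  # 2*4 / (4+9) = 0.615...
```

Averaging this score over a test set is what makes claims about training-data composition versus architecture choice directly comparable.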


feature learning and clustering

A key question in learning from clinical routine imaging data is whether we can identify coherent patterns that recur across a population and are at the same time linked to clinically relevant patient parameters. Here, we present a feature learning and clustering approach that groups 3D imaging data based on visual features at corresponding anatomical regions, extracted from clinical routine imaging data without any supervision.
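One way to read "grouping based on visual features without any supervision" is clustering region-level feature vectors with a standard algorithm; a toy k-means sketch (the actual features and clustering method are assumptions, not specified in the text):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: assign points to the nearest centroid, recompute means."""
    centers = X[[0, len(X) - 1]].astype(float)  # deterministic init for this k=2 toy
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# toy stand-in for visual features extracted at corresponding anatomical
# regions: two well-separated groups of 5-D feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 5)), rng.normal(3, 0.3, (20, 5))])
labels = kmeans(X, 2)
# without any label information, each group lands in its own cluster
```

Linking the resulting clusters to patient parameters would then be a separate, post-hoc association step.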

weakly supervised learning

To learn models that capture the relationship between semantic clinical information and image elements at scale, we have to rely on data generated during clinical routine (images and radiology reports), since expert annotation is prohibitively costly. Here, we show that re-mapping visual features extracted from medical imaging data based on weak labels found in the corresponding radiology reports creates descriptions of local image content that capture clinically relevant information. In medical imaging (a), only a small part of the information captured by visual features relates to relevant clinical information such as diseased tissue types (b). However, this information is typically available only as sets of reported observations at the image level. Here, we demonstrate how to link visual features to semantic labels (c) in order to improve retrieval (d), and how to map these labels back to image regions (e).
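A heavily simplified, prototype-based sketch of this re-mapping idea: aggregate visual features over all images whose report mentions a weak label, then describe any local region by its similarity to these label prototypes (the prototype scheme and all names here are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def label_prototypes(image_feats, image_labels, n_labels):
    """Mean visual feature over all images whose report mentions each weak label."""
    protos = np.zeros((n_labels, image_feats.shape[1]))
    for l in range(n_labels):
        mask = np.array([l in labs for labs in image_labels])
        protos[l] = image_feats[mask].mean(axis=0)
    return protos

def remap(region_feat, protos):
    """Describe a local image region by its cosine similarity to each label prototype."""
    sims = protos @ region_feat
    return sims / (np.linalg.norm(protos, axis=1) * np.linalg.norm(region_feat))

# toy setup: 2-D "visual features", two weak labels mined from reports
rng = np.random.default_rng(1)
image_feats = np.vstack([[1, 0] + rng.normal(0, 0.05, (10, 2)),
                         [0, 1] + rng.normal(0, 0.05, (10, 2))])
image_labels = [{0}] * 10 + [{1}] * 10   # image-level report observations only
protos = label_prototypes(image_feats, image_labels, 2)
scores = remap(np.array([0.9, 0.1]), protos)  # a region resembling label 0
```

The per-label scores stand in for both uses named in the text: ranking images for retrieval, and mapping labels back onto regions.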


x ray processing

X-ray is a commonly used, low-cost exam for screening and diagnosis. However, X-ray radiographs are 2D representations of 3D structures, causing considerable clutter that impedes visual inspection and automated image analysis. Here, we propose a Fully Convolutional Network to suppress undesired visual structure in radiographs while retaining relevant image information such as the lung parenchyma.
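What makes a fully convolutional network suitable here is that, with size-preserving ("same") convolutions, its output is an image of the same resolution as the input radiograph rather than a single score. A minimal sketch of that property alone, not of the trained suppression model:

```python
import numpy as np

def conv2d_same(img, kernel):
    """'Same' 2-D convolution: zero-pad so the output keeps the input's size.
    This size-preserving property lets a fully convolutional network emit
    an image-sized output (e.g. a structure-suppressed radiograph)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

x = np.ones((8, 8))                        # stand-in radiograph
y = conv2d_same(x, np.ones((3, 3)) / 9.0)  # 3x3 mean filter
# y has the same 8x8 shape as the input
```

A real suppression network stacks many such layers and is trained to regress a clutter-free target image from the input radiograph.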


fibrosis progression

Generating disease progression models from longitudinal medical imaging data is a challenging task due to the varying and often unknown state and speed of disease progression at the time of data acquisition, the limited number of scans, and varying scanning intervals. We propose a method, driven by disease appearance, for temporally aligning imaging data from multiple patients. It aligns follow-up series of different patients in time and creates a cross-sectional spatio-temporal model of the disease pattern distribution.
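The core alignment idea can be shown in a 1-D toy: estimate a per-patient time shift that best matches the patient's follow-up measurements to a shared reference progression curve. The sigmoid reference and grid search are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def align_offset(t, severity, ref_fn, offsets):
    """Grid-search the time shift that best aligns one patient's follow-up
    severity measurements with a shared reference progression curve."""
    errs = [np.sum((severity - ref_fn(t + d)) ** 2) for d in offsets]
    return offsets[int(np.argmin(errs))]

# assumed reference: sigmoid disease progression over a common timeline
ref = lambda t: 1.0 / (1.0 + np.exp(-t))
t = np.array([0.0, 1.0, 2.0])    # one patient's scan times (arbitrary units)
severity = ref(t + 1.5)          # patient observed 1.5 units into progression
offsets = np.linspace(-3.0, 3.0, 61)
shift = align_offset(t, severity, ref, offsets)  # recovers the 1.5 shift
```

Repeating this for every patient places all follow-up series on one common disease timeline, from which a cross-sectional spatio-temporal distribution can be built.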