Sparse MRF Appearance Models (SAMs)

Many image segmentation methods, such as Active Shape Models, Active Appearance Models, Active Contours, or non-rigid registration approaches, require an initialization that guarantees considerable overlap or coarse correspondence with the object to be segmented or registered.

Sparse MRF Appearance Models (SAMs) localize anatomical structures in a global manner by formulating the localization task as a Markov Random Field (MRF). SAMs relate a priori information, learned during training, about the geometric configuration of interest points and their local appearance features (local descriptors) to a set of candidate points in the target image, and encode the resulting correspondence probabilities in an MRF. The MRF is solved with the max-sum algorithm, yielding a mapping of the modeled object (e.g. a sequence of vertebrae) onto the interest points of the target image.
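To illustrate the idea: when the model's interest points form an ordered chain (as for a sequence of vertebrae), max-sum inference can be carried out exactly by dynamic programming. The sketch below is not the authors' implementation — the unary and pairwise scores, candidate coordinates, and function name are illustrative assumptions; it only demonstrates how appearance (unary) and geometric-configuration (pairwise) terms combine to pick one candidate point per model point:

```python
import numpy as np

def chain_max_sum(unary, pairwise):
    """Exact max-sum inference (Viterbi) on a chain-structured MRF.

    unary    : (M, K) appearance score for assigning model point m
               to candidate point k (e.g. local descriptor similarity).
    pairwise : (M-1, K, K) geometric score for assigning consecutive
               model points m, m+1 to candidates i, j.
    Returns the highest-scoring candidate index for each model point.
    """
    M, K = unary.shape
    msg = np.empty((M, K))        # best score of a partial chain ending at (m, k)
    back = np.zeros((M, K), int)  # backpointers for recovering the argmax
    msg[0] = unary[0]
    for m in range(1, M):
        scores = msg[m - 1][:, None] + pairwise[m - 1]  # (K, K): i -> j
        back[m] = scores.argmax(axis=0)
        msg[m] = unary[m] + scores.max(axis=0)
    assign = np.empty(M, int)
    assign[-1] = int(msg[-1].argmax())
    for m in range(M - 2, -1, -1):
        assign[m] = back[m + 1, assign[m + 1]]
    return assign

# Toy example: four model points spaced 10 units apart along a line, six
# candidate points in the target image (the four true locations plus two
# clutter points at 5 and 26).
cand = np.array([0.0, 5.0, 10.0, 20.0, 26.0, 30.0])
appearance = np.array([0.0, -1.0, 0.0, 0.0, -1.0, 0.0])  # mild penalty on clutter
unary = np.tile(appearance, (4, 1))
# Pairwise score: penalize deviation from the modeled inter-point distance of 10.
pairwise = -np.abs((cand[None, :] - cand[:, None]) - 10.0)
pairwise = np.tile(pairwise, (3, 1, 1))
print(chain_max_sum(unary, pairwise))  # -> [0 2 3 5]
```

For general (non-chain) graphs of interest points, max-sum message passing on the corresponding factor graph is exact on trees and approximate on loopy graphs; the chain case above is simply the special case where dynamic programming recovers the global optimum.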

The approach requires no initialization and finds the most plausible match of the query structure in the image. It provides precise, reliable, and fast localization of the structure and can serve as an initialization for subsequent, more detailed segmentation or registration steps.

The implementation is freely available. 

Sparse MRF Appearance Models for Fast Anatomical Structure Localisation
BMVC 2007, Best Scientific Paper Prize
René Donner, Branislav Micusik, Georg Langs, Horst Bischof