Image Segmentation based on Tree Equipartition, Bayesian Flooding and Region Merging
Our image segmentation framework
involves feature extraction and classification in feature space,
followed by flooding in the spatial domain, based on the computed
local measurements and on distances from the feature distributions
describing the different classes. The global nature of this description
ensures spatial coherence through the properties of the label-dependent distances.
The feature distributions of the different classes are obtained by
block-based unsupervised clustering, based on the construction of the
minimum spanning tree of the blocks' grid using the Mallows distance
and on the equipartition of the resulting tree.
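As a rough illustration of this stage, the sketch below builds a 4-connected graph over image blocks, weights each edge with the 1-D Mallows (Wasserstein) distance between the blocks' intensity samples, extracts the minimum spanning tree, and splits it into k groups. The block size, the grayscale intensity features, and the "remove the k-1 heaviest tree edges" rule are simplifying assumptions; the actual equipartition criterion balances the resulting subtrees rather than cutting the heaviest edges.

```python
# Sketch only: block grid -> MST with Mallows-distance weights -> crude k-way split.
# Block size, features, and the heaviest-edge cut are assumptions, not the exact criterion.
import numpy as np
from scipy.stats import wasserstein_distance          # 1-D Mallows / Wasserstein distance
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def block_cluster_labels(image, block=16, k=4):
    h, w = image.shape                                 # assumes a 2-D grayscale image
    bh, bw = h // block, w // block
    # One empirical distribution (here: the raw intensity sample) per block.
    samples = [image[i*block:(i+1)*block, j*block:(j+1)*block].ravel()
               for i in range(bh) for j in range(bw)]
    n = bh * bw
    rows, cols, weights = [], [], []
    for i in range(bh):                                # 4-connected grid over the blocks
        for j in range(bw):
            u = i * bw + j
            for v in (u + 1 if j + 1 < bw else None, u + bw if i + 1 < bh else None):
                if v is not None:
                    rows.append(u); cols.append(v)
                    weights.append(wasserstein_distance(samples[u], samples[v]))
    graph = coo_matrix((weights, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph).tocoo()
    # Crude stand-in for tree equipartition: drop the k-1 heaviest MST edges.
    keep = np.argsort(mst.data)[:max(len(mst.data) - (k - 1), 0)]
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=(n, n))
    _, labels = connected_components(pruned, directed=False)
    return labels.reshape(bh, bw)                      # one cluster label per block

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    print(block_cluster_labels(img, block=16, k=4))
```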
The final clustering is obtained with the k-centroids algorithm.
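A minimal Lloyd-style k-centroids loop over block feature vectors might look as follows; the Euclidean distance and the random initialisation are placeholders for whichever distance and seeding the framework actually uses.

```python
# Minimal k-centroids sketch; the distance and the initialisation are assumptions.
import numpy as np

def k_centroids(features, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign every feature vector to its nearest centroid.
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned vectors.
        new = np.array([features[assign == c].mean(axis=0) if np.any(assign == c)
                        else centroids[c] for c in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, assign

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feats = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
    cents, labels = k_centroids(feats, k=2)
    print(cents)
```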
Connected components of the maximum-likelihood classification map that
satisfy high-probability and topological constraints are used
to compute a map of initially labelled pixels.
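One plausible reading of this seed-extraction step is sketched below: classify each pixel to its maximum-likelihood class, keep only pixels whose class probability is high, and retain connected components above a minimum size. The probability threshold and the size constraint are illustrative assumptions standing in for the actual constraints.

```python
# Sketch: high-confidence connected components of an ML classification map as seeds.
# The probability threshold and minimum component size are illustrative assumptions.
import numpy as np
from scipy import ndimage

def seed_map(class_probs, prob_thresh=0.9, min_size=50):
    """class_probs: (H, W, C) per-pixel class probabilities. Returns seeds; -1 = unlabelled."""
    ml_label = class_probs.argmax(axis=2)
    confident = class_probs.max(axis=2) >= prob_thresh
    seeds = np.full(ml_label.shape, -1, dtype=int)
    for c in range(class_probs.shape[2]):
        mask = confident & (ml_label == c)
        comps, n = ndimage.label(mask)                 # 4-connected components (default)
        for comp_id in range(1, n + 1):
            comp = comps == comp_id
            if comp.sum() >= min_size:                 # simple size/topology constraint
                seeds[comp] = c
    return seeds

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    probs = rng.dirichlet(np.ones(3), size=(64, 64))
    print(np.unique(seed_map(probs, prob_thresh=0.6, min_size=10)))
```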
An efficient flooding algorithm, the Priority
Multi-Class Flooding Algorithm (PMCFA), then assigns pixels to
labels using Bayesian dissimilarity criteria.
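The exact cost used by PMCFA is not reproduced here; the sketch below is a generic priority-queue flooding from the labelled seeds, in which pixels are popped in order of increasing dissimilarity to the flooding class and claimed by the wavefront that reaches them first. The squared distance to an assumed per-class mean stands in for the Bayesian dissimilarity criterion.

```python
# Generic priority flooding from labelled seeds; the squared-distance cost is only a
# stand-in for the Bayesian dissimilarity used by PMCFA.
import heapq
import numpy as np

def priority_flood(image, seeds, class_means):
    """image: (H, W) feature map, seeds: (H, W) with -1 = unlabelled, class_means: per-class value."""
    h, w = image.shape
    labels = seeds.copy()
    heap = []
    for y, x in zip(*np.nonzero(seeds >= 0)):          # initialise the queue with the seeds
        heapq.heappush(heap, (0.0, int(y), int(x), int(seeds[y, x])))
    while heap:
        cost, y, x, lab = heapq.heappop(heap)
        if labels[y, x] == -1:
            labels[y, x] = lab                         # cheapest wavefront claims the pixel
        elif labels[y, x] != lab:
            continue                                   # already claimed by another class
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):   # 4-neighbourhood
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                d = (image[ny, nx] - class_means[lab]) ** 2               # assumed dissimilarity
                heapq.heappush(heap, (max(cost, d), ny, nx, lab))
    return labels

if __name__ == "__main__":
    img = np.zeros((6, 8)); img[:, 4:] = 1.0
    seeds = np.full(img.shape, -1); seeds[0, 0] = 0; seeds[0, 7] = 1
    print(priority_flood(img, seeds, class_means=[0.0, 1.0]))
```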
A new region merging method, which incorporates boundary information,
is the last step for obtaining the final segmentation map.
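A boundary-aware merging pass could be approximated as below: adjacent regions are merged when the evidence along their shared boundary is weak. The mean intensity difference across boundary pixel pairs and the merge threshold are assumed proxies for the actual boundary term and criterion.

```python
# Sketch: merge adjacent regions whose shared boundary shows weak contrast.
# The boundary statistic and the threshold are assumptions, not the actual merging criterion.
import numpy as np

def merge_weak_boundaries(image, labels, thresh=0.1):
    labels = labels.copy()
    h, w = labels.shape
    # Accumulate, for every adjacent label pair, the contrast across the shared boundary.
    sums, counts = {}, {}
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y, x + 1), (y + 1, x)):    # right and down neighbours
                if ny < h and nx < w and labels[y, x] != labels[ny, nx]:
                    pair = tuple(sorted((int(labels[y, x]), int(labels[ny, nx]))))
                    sums[pair] = sums.get(pair, 0.0) + abs(float(image[y, x]) - float(image[ny, nx]))
                    counts[pair] = counts.get(pair, 0) + 1
    # Merge the pairs whose mean boundary contrast falls below the threshold.
    for (a, b), s in sums.items():
        if s / counts[(a, b)] < thresh:
            labels[labels == b] = a
    return labels

if __name__ == "__main__":
    img = np.zeros((4, 6)); img[:, 3:] = 0.05          # almost no contrast at the boundary
    labs = np.zeros((4, 6), dtype=int); labs[:, 3:] = 1
    print(merge_weak_boundaries(img, labs, thresh=0.1))  # regions 0 and 1 get merged
```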
Segmentation results on the entire Berkeley benchmark data set are given below.