Mathematical Morphology:

Proceedings of the VIth International Symposium: ISMM 2002

Mathematical morphology is a powerful methodology for processing and analysing the shape and form of objects in images. Advances in this area enable applications such as the digital recognition and modeling of faces and other objects by computer.

Mathematical Morphology is a comprehensive work that provides a broad sampling of the most recent theoretical and practical developments in applications to image processing and analysis. Subject areas covered include: binary morphology, regularised region growing, morphological scale-space techniques, levelings, reconstruction, modeling and simulation, and applications as diverse as medicine, forestry and geology.

This fascinating research will be of great interest to engineers, computer scientists, mathematicians and statisticians whose research work is focussed on the theoretical and practical aspects of non-linear image processing and analysis. The content stems from the proceedings of the VIth International Symposium on Mathematical Morphology, held April 3–5, 2002 in Sydney, Australia.

    1. Page 1
    2. Page 17
      Abstract

We describe the “DjVu” (Déjà Vu) technology: an efficient document image compression methodology, a file format, and a delivery platform that together enable instant access to high-quality documents from essentially any platform, over any connection. Originally developed for scanned color documents, it was recently expanded to electronic documents, so DjVu has now truly become a universal document interchange format.

With DjVu, a color magazine page scanned at 300 dpi typically occupies between 40 KB and 80 KB, i.e. approximately 5 to 10 times smaller than JPEG for a similar level of readability (the typical compression ratio is 500:1). Converting electronic documents to DjVu also offers substantial advantages, as described in the paper. The technology relies on a classification of each pixel as either foreground (text, drawing) or background (pictures, paper texture and color), thereby producing a segmentation into layers that are compressed separately. The novel contribution of this paper is a unified approach for segmentation of scanned or electronic documents, using a rigorous approach based on the Minimum Description Length (MDL) principle.

      The foreground layer is compressed using a pattern matching technique taking advantage of the similarities between character shapes. A progressive, wavelet-based compression technique, combined with a masking algorithm, is then used to compress the background image at lower resolution, while minimizing the number of bits spent on the pixels that are otherwise covered by foreground pixels. Encoders, decoders, and real-time, memory efficient plug-ins for various web browsers are available for all the major platforms.

    3. Page 37
      Abstract

We recently realized that the Marr-Hildreth edges, computed as the zero crossings of the image Laplacian, can be viewed as optimal edge integration curves solving a geometric variational problem. We used this observation to derive a new set of edge integration and object segmentation procedures. Here we show that the edge detectors proposed by Haralick, subsequently claimed by Canny to be optimal in some sense (based on 1D criteria), and then numerically and computationally enhanced by Deriche, can also be interpreted as optimal edge contours whose normals align with the image gradient field, and which further satisfy a topological uniformity measure inside a closed region defined by the contour. The combination of these two measures yields a robust edge detection/integration procedure, as well as better geometric curve evolution based segmentation procedures. In other words, we provide 2D variational reasoning for the classical Haralick/Canny/Deriche-like edge detectors. We show how to use this new formulation to introduce novel geometric segmentation by curve evolution.

    4. Page 47
      Abstract

Levelings are connected operators. On a digital grid, a function g is an elementary leveling of f if and only if g = (f ∧ δg) ∨ εg (where δg and εg are respectively the maximum and minimum of g on an elementary neighborhood of the grid). Such a leveling enlarges the flat zones: each flat zone of f is included in a flat zone of g. A more general leveling can be associated with each pair consisting of an extensive dilation α and its adjoint anti-extensive erosion β. In the present paper, we show that such a leveling, defined by g = (f ∧ αg) ∨ βg, is still a connected operator. We describe the associated flat zones, and characterize the floodings, razings, levelings and flattenings associated with α and β.
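
A minimal sketch (not code from the paper) of the elementary-leveling criterion g = (f ∧ δg) ∨ εg on a 1D signal; the 3-point neighborhood here stands in for the grid's elementary neighborhood:

```python
def is_elementary_leveling(f, g):
    """Check g == (f ∧ δg) ∨ εg at every sample, where δg and εg are the
    max and min of g over the 3-point neighborhood {x-1, x, x+1}."""
    n = len(g)
    for x in range(n):
        lo, hi = max(0, x - 1), min(n, x + 2)
        dg = max(g[lo:hi])  # δg at x (elementary dilation)
        eg = min(g[lo:hi])  # εg at x (elementary erosion)
        if g[x] != max(min(f[x], dg), eg):
            return False
    return True

f = [0, 3, 1, 4, 2]
print(is_elementary_leveling(f, f))                # True: f is a leveling of itself
print(is_elementary_leveling(f, [2, 2, 2, 2, 2]))  # True: any constant is a leveling
print(is_elementary_leveling(f, [5, 0, 0, 0, 0]))  # False
```

Any constant function passes the test, since δg = εg = c makes the right-hand side collapse to c.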

    1. Page 69
      Abstract

The morphological approach to image segmentation is based on the watershed transform. It is a very powerful tool with many advantages: it is non-parametric, computationally efficient … However, in comparison with energy-based methods such as active contours, it does not allow control over the smoothness of the result. In order to introduce geometric regularization constraints into the morphological segmentation scheme, two options are possible. The first consists in simulating a viscous flooding during the construction of the watershed. The second consists in computing the watershed on a smoothed relief. In this article, the second alternative is chosen. The relief is modified so that its non-viscous flooding is equivalent to the viscous flooding of the original relief. This choice clearly separates the smoothing procedure from the strict watershed computation and thereby preserves the qualities and speed of the watershed transform. The performance of our regularization method is illustrated and discussed on an ultrasound imaging application.

    2. Page 79
      Abstract

Let E be an arbitrary space, and δ a dilation of 𝒫(E) into itself, with an adjoint erosion ε. Then the image of 𝒫(E) under δ is a complete lattice in which the sup is the union and the inf is the opening of the intersection according to δε. This lattice, named viscous, is neither distributive nor complemented. Any dilation α on 𝒫(E) admits the same expression in the viscous lattice. However, the erosion in the viscous lattice is the opening according to δε of the erosion in 𝒫(E). The image under δ of any connection on 𝒫(E) is a connection on the viscous lattice. Moreover, the elementary connected openings γx of the two connections are linked by a simple relation. Two examples, one binary and one numerical (the latter from heart imaging), prove the relevance of viscous lattices in interpolation problems.

    3. Page 91
      Abstract

Region growing algorithms, including seeded region growing, are widely employed in image segmentation problems. Region growing techniques commonly use queue-based methods that sequentially add pixels to regions. These algorithms are very flexible, but it is sometimes desirable to apply constraints to the growing process that reflect higher-level knowledge about the problem.

      This paper describes a technique that constrains (regularizes) the border smoothness of a growing region using a polygonal representation and provides some results illustrating a possible application. The technique can be usefully combined with traditional, unregularized techniques.
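
A sketch of the kind of queue-based seeded region growing referred to above, written as a generic variant with an assumed cost (accumulated grey-level difference), not the authors' polygon-constrained algorithm:

```python
import heapq

def seeded_region_growing(image, seeds):
    """image: 2D grey values; seeds: {(row, col): label}. Pixels enter a
    priority queue ordered by the accumulated grey-level difference along
    the growth path (one plausible cost; other choices are common)."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = [(0, r, c, lab) for (r, c), lab in seeds.items()]
    heapq.heapify(heap)
    while heap:
        cost, r, c, lab = heapq.heappop(heap)
        if labels[r][c]:
            continue                      # already claimed by a cheaper path
        labels[r][c] = lab
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not labels[nr][nc]:
                step = abs(image[nr][nc] - image[r][c])
                heapq.heappush(heap, (cost + step, nr, nc, lab))
    return labels

img = [[0, 0, 9, 9],
       [0, 0, 9, 9]]
print(seeded_region_growing(img, {(0, 0): 1, (0, 3): 2}))
```

On the toy image, the two seeds each flood their homogeneous half before crossing the costly grey-level step, yielding `[[1, 1, 2, 2], [1, 1, 2, 2]]`.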

    1. Page 101
      Abstract

A novel method of texture primitive description using the morphological skeleton is proposed. The skeleton of an object has the property that it reduces to a single point when the structuring element used for the skeletonization is exactly homothetic to the object. The proposed method exploits this property by minimizing the number of pixels contained in the skeleton. If we assume that the texture is composed of one primitive, the structuring element minimizing the number of skeleton pixels is homothetic to the primitive of the texture, by the above property of the skeleton. Simulated annealing is employed for the minimization. The method has the advantage that it requires no assumption on the size distribution of grains in the texture.
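
The collapse property can be illustrated with the standard Lantuéjoul skeleton (an assumed formulation; the paper's simulated-annealing search is not shown): a square object skeletonized with a homothetic square structuring element leaves a one-point skeleton.

```python
def erode(X, B):
    return {p for p in X if all((p[0] + b[0], p[1] + b[1]) in X for b in B)}

def dilate(X, B):
    return {(p[0] + b[0], p[1] + b[1]) for p in X for b in B}

def skeleton(X, B):
    """Lantuéjoul's formula: union over n of ε^n(X) minus its opening by B."""
    S, Y = set(), set(X)
    while Y:
        S |= Y - dilate(erode(Y, B), B)
        Y = erode(Y, B)
    return S

B = {(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)}    # 3x3 square SE
square = {(i, j) for i in range(5) for j in range(5)}   # 5x5 square object
print(skeleton(square, B))  # collapses to the single centre point (2, 2)
```

A structuring element of a different shape would leave a multi-pixel skeleton, which is exactly the quantity the paper's annealing minimizes.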

    2. Page 109
      Abstract

A novel method of multiprimitive texture analysis is proposed. The method segments a texture with the watershed algorithm into fragments, each of which contains one grain. The size density function of each fragment is calculated, and the fragments are placed in a feature space whose axes are the values of the size density function at each size. The shape of each grain is distorted by the segmentation if the grains overlap, and the watershed algorithm may cause over-segmentation. Thus the fragments containing grains corresponding to one primitive scatter in the feature space. However, a subsequent cluster analysis collects neighboring fragments in the feature space into a cluster, and the grains in a cluster are regarded as corresponding to one primitive. The number of distinct primitive shapes is obtained as the number of distinct clusters, and each primitive is obtained as the central fragment of its cluster.

    3. Page 117
      Abstract

A state-of-the-art review of 3D surface characterization techniques in the automotive and steel industries is presented, and links with mathematical morphology are highlighted.

      A surface topography decomposition method is described. It decomposes a surface into three elements: reference surface (waviness and form), superficial roughness (related to friction and wear) and valleys (related to lubricant circulation and reservoirs). It is applied to cylinder liners from an internal combustion V6 engine in order to remove form and waviness components. The study of the resulting superficial roughness component allows a precise wear characterization.

    4. Page 127
      Abstract

      In this paper we propose important improvements to the most common image editing tools (namely the pencil, the magic wand and the lasso) that are implemented in most commercial image editing software. The improvements are based on a multi-scale segmentation approach that relieves the user from the exact definition of contours while obtaining an accurate result.

    5. Page 137
      Abstract

In this paper, novel algorithms developed to automatically segment forest cover types are presented. The digital image analysis is mainly based on mathematical morphology operators and exploits the textural features of high-spatial-resolution images. The input images consist of true-colour digital orthophotos, and the forest covers studied are the main types occurring in Portugal: olive trees, cork oak trees, pine and eucalyptus trees.

    6. Page 147
      Abstract

In this paper, a novel methodology is presented for constructing decision region borders that geometrically model the training sets of points. It is shown that by incorporating the features of the training sets, more accurate decision borders are designed and, in consequence, higher classification rates are obtained. The methodology is based entirely on mathematical morphology operators and is illustrated with two features (the tasselled-cap wetness and the NDVI vegetation index) of seven land cover classes (forest (3), soil (2), vegetation and water) constructed from remotely sensed images of a region in central Portugal.

    7. Page 157
      Abstract

      In this paper, a method is described for producing a skeleton of a 3D branching object, the ureteric tree of the developing mouse kidney. The algorithm has to deal with scale differences between the 3 directions of space, and therefore, a Scaled Distance Function and a Weighted Geodesic Distance are defined. The extremities of the branches are defined as maxima of the Weighted Geodesic Distance and they are linked together using centred paths in order to obtain a skeleton. The results of the algorithm are presented as well as their value in studying the developing kidney.

    8. Page 165
      Abstract

This paper outlines methods to perform basic spatial operations on 2D digital images that have been mapped into an equivalent set of digital projections. The digital projections are generated using the Discrete Radon Transform (DRT). Each digital projection is similar to an oriented straight-line integral of the Radon transform, and the set of digital projections resembles the sinogram from classical tomography. Mapping discrete 2D data to a set of discrete 1D projections provides representational advantages and significant computational efficiency for some operations. The 2D distribution of image pixels viewed along each direction is expressed through one 1D digital projection. The integration of image values to form a directed projection smooths and hence compresses the variance of the original data. The DRT may, however, complicate the comparison of individual pixel values across a local area of the image, because it stores values that are global sums of local intensities. Pixel locations across a region of interest can be linked to those digital projections that contain contributions from those local values. Groups of pixels can be selected by particular translations of digital rays to form periodic lines. Algorithms for generalized linear convolution operations in the DRT domain are presented, as is an initial review of the effect of non-linear, rank-based operators applied to digital projections in the DRT space.
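
A toy sketch of the digital-projection idea (not the paper's exact DRT, which uses periodic rays): pixels on the same digital ray are accumulated into one bin, and every projection preserves the total image mass.

```python
def project(image, direction):
    """Accumulate pixels lying on the same digital ray of the given
    direction (dr, dc): ray index k = r*dc - c*dr."""
    bins = {}
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            k = r * direction[1] - c * direction[0]
            bins[k] = bins.get(k, 0) + v
    return bins

img = [[1, 2],
       [3, 4]]
rows = project(img, (0, 1))   # horizontal rays: row sums
cols = project(img, (1, 0))   # vertical rays: column sums
diag = project(img, (1, 1))   # 45-degree rays
print(rows, cols, diag)       # each projection sums to the total mass, 10
```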

    9. Page 175
      Abstract

Several libraries dedicated to mathematical morphology exist, but they lack genericity, that is to say, the ability for operators to accept input of different natures: 2D binary images, graphs holding floating-point values, etc. We describe solutions integrated in Olena, a library providing morphological operators. We demonstrate with some examples that translating mathematical formulas and algorithms into source code is made easy and safe with Olena. Moreover, experimental results show that no extra run-time cost is incurred.

    10. Page 185
      Abstract

In video compression, image pre-processing is used to improve the overall compression performance. A key requirement of any such system is the ability to remove noise and other high-frequency details while preserving the visual quality of the images. This paper presents a new approach to pre-filtering, based on area morphology. Both the standard area and a new constraint, based on the power within the granule, are investigated, and psycho-visual experiments are used to determine the highest value of each constraint for which the filtering is visually lossless. The compressibility of the pre-processed images, assessed by a number of compression methods, is significantly improved, thus demonstrating the advantages of the area morphology methods, with the power constraint providing the greatest improvement over the original images.

    1. Page 197
      Abstract

Two new families of algorithms for computing openings of binary and label images are presented in this paper. The first family is based on a horizontal scan, and a vertical scan that takes the result of the horizontal scan as input. With the results of these two scans it is possible to compute an opening with respect to a rectangle of any size. The second family is based on the construction of an exhaustive list of rectangles included in an object. All rectangles in this list have maximal extension, i.e. no larger rectangle included in the region contains them. The opening then results from filling the output image with the rectangles larger than the structuring element. After a description of the algorithms, we compare several algorithms in terms of computation-time efficiency. The comparison shows that some of the new algorithms compete advantageously with existing algorithms.
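
Neither family of algorithms is reproduced here, but the separability that makes rectangle openings cheap can be sketched: a w×h rectangle is the dilation of a horizontal segment by a vertical one, so erosion and dilation each split into two 1D passes, and the opening is the erosion followed by the adjoint dilation.

```python
def transpose(m):
    return [list(col) for col in zip(*m)]

def erode1d(line, k):
    # erosion by a length-k segment anchored at its left end
    n = len(line)
    return [i + k <= n and all(line[i:i + k]) for i in range(n)]

def dilate1d(line, k):
    # adjoint dilation (reflected segment, anchored at its right end)
    return [any(line[max(0, i - k + 1):i + 1]) for i in range(len(line))]

def open_rect(image, w, h):
    e = [erode1d(row, w) for row in image]                    # erode horizontally
    e = transpose([erode1d(col, h) for col in transpose(e)])  # then vertically
    d = transpose([dilate1d(col, h) for col in transpose(e)]) # dilate vertically
    return [dilate1d(row, w) for row in d]                    # then horizontally

mask = [[True, True, True, False],
        [True, True, True, False],
        [False, False, False, True]]
opened = open_rect(mask, 2, 2)  # keeps the 2x3 block, removes the lone pixel
```

This separable route costs O(whN) per image of N pixels in the naive form above; the paper's scan-based algorithms remove the dependence on the rectangle size.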

    2. Page 209
      Abstract
      Contact: bloomberg@ieee.org

      The efficiency of implementations of binary morphology is investigated, using both full image rasterops and word accumulation methods. All processing speeds are expressed in a way that is relatively independent of CPU speed and the sizes of both image and structuring element; namely, elementary pixel operations per CPU Hz (EPO/Hz). Options for handling boundary pixels are discussed. It is shown that use of successive full image rasterops is much slower than methods where the full structuring element is applied repeatedly to small parts of the image. Processing speeds of the former range from about 1 to 3 EPO/Hz, whereas the latter are typically between 4 and 7 times faster and range from 3 to 12 EPO/Hz. For small images using rasterops, vertical operations are about twice as fast as horizontal (3.2 vs 1.6 EPO/Hz); using word accumulation, vertical operations are only slightly faster than horizontal (12 vs 10 EPO/Hz). Performance on large images is reduced by a factor of between 2 and 4, due to slow reads and writes to main memory.

    3. Page 219
      Abstract

This paper discusses the design of binary image operators from training data and its relation to Boolean function learning. An extended version of the incremental splitting of intervals (ISI) algorithm for Boolean function learning is proposed, together with some improvements and heuristics to reduce its processing time. Some examples illustrate the application of the algorithm.

    4. Page 229
      Abstract

      The classical morphological method to separate fused objects in binary images is to use the watershed transform on the complement of the distance transform of the binary image. This method assumes roughly disk-like objects and cannot separate objects when they are fused together beyond a certain point.

In this paper we revisit the issue by assuming that fused objects are unions of ellipses rather than mere disks. The problem is recast in terms of finding the constituent primary grains given a Boolean model of ellipses.

      To this end, we modify the well-known pseudo-Euclidean distance transform algorithm to generate arbitrary elliptical distance transforms to reduce the dimension of the problem and we present a goodness-of-fit measure that allows us to select ellipses.

The results of the methods are given on both synthetic Boolean model samples and real data.
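
A brute-force sketch of an elliptical distance transform (the paper instead modifies a pseudo-Euclidean propagation algorithm; the semi-axes a and b are assumed inputs): each foreground pixel gets its distance to the nearest background pixel in the elliptical norm.

```python
import math

def elliptical_dt(mask, a, b):
    """Distance of each foreground pixel to the nearest background pixel in
    the elliptical norm sqrt((dcol/a)^2 + (drow/b)^2). Brute force O(n^2);
    assumes at least one background pixel."""
    bg = [(r, c) for r, row in enumerate(mask)
          for c, v in enumerate(row) if not v]
    return [[0.0 if not v else
             min(math.hypot((r - br) / b, (c - bc) / a) for br, bc in bg)
             for c, v in enumerate(row)]
            for r, row in enumerate(mask)]

mask = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(elliptical_dt(mask, 2.0, 1.0))  # horizontal steps count half as much
```

With a = 2 and b = 1, the centre pixel's nearest background in the elliptical norm is a horizontal neighbour at distance 0.5 rather than a vertical one at distance 1.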

    1. Page 241
      Abstract

Scale spaces allow us to organize, compare and analyse differently sized structures of an object. In this work, we present and compare five ways of discretizing the Gaussian scale-space: sampling Gaussian distributions; recursively calculating Gaussian approximations; using splines; approximating by first-order generators; and finally, a new method we call “Crossed Convolutions”.
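
The first of the five discretizations, sampling Gaussian distributions, can be sketched as follows (truncation at 3σ and border replication are assumptions, not details from the paper):

```python
import math

def gaussian_kernel(sigma):
    """Sample the continuous Gaussian on the integer grid and renormalize."""
    radius = int(math.ceil(3 * sigma))
    k = [math.exp(-x * x / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1D convolution with replicated border samples."""
    r = len(kernel) // 2
    n = len(signal)
    return [sum(kernel[r + j] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1))
            for i in range(n)]

k = gaussian_kernel(1.0)
smoothed = convolve([0, 0, 0, 10, 0, 0, 0], k)
```

Renormalization compensates for the truncated tails, so the kernel sums to one and the impulse's mass is preserved.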

    2. Page 253
      Abstract

      This paper is concerned with the axiomatics of image scale-spaces. A major ingredient is the invariance under a family of scalings. A scaling can be defined explicitly, but also through a system of ODE’s. It is shown how existing PDE approaches for scale-spaces can be fit into the axiomatic framework, either directly or with the help of Lie group methods.

    3. Page 265
      Abstract
      correspondence: metzler@isip.mu-luebeck.de, Phone: +49 451 3909 559 (Fax: -555)

Mathematical morphology interprets images as unions of individually shaped and superposed coherent structures. Within this framework, reconstruction filters preserve shapes of image structures that correspond to given base structures, while other structures are removed. This shape-selective transform enables consistent processing of visually perceivable image structures, e.g. of those image entities that have a semantic interpretation for a specific observer in a certain context. Such content-based processing can be regarded as being object-oriented. Also, it is commonly accepted that high-level image processing needs to consider appropriate ranges of scales. For these reasons, a morphological multiscale approach to object-oriented image analysis is established by a reconstructive scale-space preserving the inflection contours of an image, each of which represents a distinct object. Towards coarser scales the objects merge in a causal way and finally form a completely partitioned scale-space with an inclusion relation between fine and coarse objects. This object hierarchy is transformed into a relational database representing properties and inter-scale relations of independent objects by invariant attributes and inclusions, respectively. It serves as an interface to a rule-based expert system that evaluates observer-specified queries automatically by selecting optimal sets of objects regarding the scale-behavior of their descriptive attributes. The object-oriented image analysis has already yielded promising results for various clinical tasks concerning macroscopic and microscopic medical images.

    4. Page 277
      Abstract

Signal simplification, as measured through a reduction in signal features, is an important aspect of scale-space theories. Partial ordering can be seen as a formal characterization of the notion of a simpler version of a signal. The partial orderings implied by some scale-spaces which obey monotonicity of features are examined, with examples drawn from 1-D Gaussian scale-space and multiscale dilation scale-space. We hope to provide a unifying framework for such scale-spaces.

    1. Page 283
      Abstract

      This report sheds some new light on an old and trusted tool in the image processing toolbox. The Kuwahara-Nagao operator is known as an edge preserving smoothing operator.

This report shows that we do not need to trust our intuition to verify this claim: it follows from basic principles. To that end we first cast the classical Kuwahara-Nagao operator into a more modern framework, using Gaussian convolutions to calculate the mean and variance of local image patches, and using morphological flow fields to select the mean value of the neighborhood that has minimal variance.

      The main result of this report is the formulation of the PDE whose solution is equivalent to the iteration of a Kuwahara-Nagao operator using infinitesimally small neighborhoods. The resulting PDE is a combination of linear diffusion and morphological sharpening.
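
For reference, the classical (pre-PDE) Kuwahara-Nagao operator described above, sketched from its standard definition: each pixel takes the mean of whichever of its four surrounding subwindows has the smallest variance, which is what preserves edges.

```python
def kuwahara(image, r=1):
    rows, cols = len(image), len(image[0])

    def stats(r0, c0, r1, c1):
        # (variance, mean) of the window, clipped to the image
        vals = [image[i][j] for i in range(r0, r1) for j in range(c0, c1)
                if 0 <= i < rows and 0 <= j < cols]
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals), m

    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            quadrants = [stats(i - r, j - r, i + 1, j + 1),   # top-left
                         stats(i - r, j, i + 1, j + r + 1),   # top-right
                         stats(i, j - r, i + r + 1, j + 1),   # bottom-left
                         stats(i, j, i + r + 1, j + r + 1)]   # bottom-right
            out[i][j] = min(quadrants)[1]  # mean of least-variance window
    return out

step = [[0, 0, 10, 10] for _ in range(4)]
print(kuwahara(step))  # the step edge survives untouched
```

On the step image, every pixel has at least one zero-variance quadrant lying entirely on its own side of the edge, so the output equals the input: smoothing without edge blurring.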

    2. Page 293
      Abstract

In this paper, we do not aim at new applications or algorithms, but at a formalism as simple and as practical as possible to deal with the several useful concepts in image and shape analysis, namely: level sets of a function, reconstruction of a function from its level sets, monotone set operators, contrast invariant monotone image operators, threshold superposition principle, sup-inf operators and flat morphology, image operators commuting with thresholds.

We prove that five slightly different terminologies or formalisms can be merged into a single simple presentation. Namely: the operators of “flat Mathematical Morphology”, the “contrast invariant image operators”, the “monotone set operators”, the “sup-inf” operators and finally the “contrast invariant image operators defined on continuous images” are fully equivalent. In this equivalence statement, set functions are defined from set operators by the threshold superposition principle, and set operators are defined from contrast invariant operators by the so-called Evans-Spruck extension. All that we prove may be known in different contexts but has not been formalized, to our knowledge, in a simple unified format. The closest theory to what we present, in Mathematical Morphology, is the abstract framework of complete lattices. We do not impose any completeness requirement in what follows, and the statements apply to operators defined on any set of functions or any set of sets. As an illustration, we show how the unified formalism makes it easy to define several image operators by giving their simpler set-operator definition, and conversely how we can easily deal with set operators defined from P.D.E.'s, as occurs with geodesic snakes. The formal presentation of contrast invariant mathematical morphology given here will be developed in the planned book [5].
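
The threshold superposition principle at the heart of this equivalence can be checked numerically. The sketch below (median filter chosen here as the example flat operator) compares the direct filter with the stacked set-operator construction, applying the set-level rank criterion to every threshold set X_λ = {x : f(x) ≥ λ} and taking the supremum:

```python
def median3(f):
    """Direct flat median filter (upper median on truncated border windows)."""
    out = []
    for i in range(len(f)):
        win = sorted(f[max(0, i - 1):i + 2])
        out.append(win[len(win) // 2])
    return out

def median3_stacked(f):
    """Same filter via threshold superposition: largest λ whose threshold
    set still contains the pixel after the set-level rank operator."""
    out = []
    for i in range(len(f)):
        win = f[max(0, i - 1):i + 2]
        need = (len(win) + 1) // 2      # rank matching the upper median
        best = min(f)
        for lam in sorted(set(f)):      # f integer-valued: its own values suffice
            if sum(v >= lam for v in win) >= need:
                best = lam
        out.append(best)
    return out

f = [3, 1, 4, 1, 5, 9, 2]
print(median3(f) == median3_stacked(f))  # True: the two routes agree
```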

    1. Page 305
      Abstract

      Multiscale methods which provide a decomposition of an image based on scale have many uses in image analysis. One class of such methods from mathematical morphology is based on size distributions or granulometries. In this paper a different type of image decomposition based on shape but not scale is proposed. Called a shape granulometry or shape distribution, it is built from a family of morphological thinnings, rather than openings as in the case of size distributions. An implementation based on scale invariant attribute thinnings is presented, and an example of an application is shown.

    2. Page 315
      Abstract

      A number of subjective and objective methods have been proposed, and are in use, to determine the roundness of sedimentary rock particles. Roundness, which is one of three properties describing the shape of a particle, is a measure of the extent to which the corners and edges of a particle have been worn away.

      The objective roundness measures are mainly based on the Fourier transform. In this paper granulometric methods are proposed as alternative objective methods. In fact, we show that even a single morphological opening with a disk of adequately chosen radius can provide a roundness measure that is highly correlated with the “objective” roundness, as provided by Krumbein’s chart [5]. In addition, we propose an efficient alternative method based on a linear opening of the polar representation of the boundary of the particles considered. In our experiments, both methods compared favorably with the Fourier transform method of Diepenbroek et al. [3].
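
A sketch of the single-opening roundness measure (with an ad hoc structuring-element radius, not the paper's adequately chosen one): the fraction of a particle's area preserved by an opening with a disk is lower for angular particles, whose corners the disk cannot enter.

```python
def disk(r):
    return {(i, j) for i in range(-r, r + 1) for j in range(-r, r + 1)
            if i * i + j * j <= r * r}

def opening(X, B):
    eroded = {p for p in X
              if all((p[0] + b[0], p[1] + b[1]) in X for b in B)}
    return {(p[0] + b[0], p[1] + b[1]) for p in eroded for b in B}

def roundness(X, r):
    # fraction of the particle's area surviving an opening with disk(r)
    return len(opening(X, disk(r))) / len(X)

rounded = disk(6)                                               # round particle
angular = {(i, j) for i in range(-6, 7) for j in range(-6, 7)}  # square particle
print(roundness(rounded, 3), roundness(angular, 3))
```

Since the opening is anti-extensive, the measure is at most 1, and the square scores lower than the disk because the opening shaves off its corners.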

    1. Page 327
      Abstract

How can one produce realizations of a spatial stochastic model when they are subject to a set of conditions or constraints? Such realizations are called conditional simulations. This paper presents some statistical tools for carrying out conditional simulation of spatial stochastic models: regression techniques for the conditional simulation of Gaussian random functions, the Gibbs sampler for truncated Gaussian or plurigaussian random functions, and the Metropolis-Hastings algorithm and restriction of a Markov chain for the Boolean model.
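
One small ingredient of such samplers, drawing from a truncated Gaussian, can be sketched by rejection (the paper's full Gibbs sweep over conditioning data is not shown; the interval below is an arbitrary example):

```python
import random

def truncated_gaussian(a, b, rng):
    """Rejection-sample a standard Gaussian restricted to [a, b].
    Simple but inefficient when [a, b] has low probability mass."""
    while True:
        x = rng.gauss(0.0, 1.0)
        if a <= x <= b:
            return x

rng = random.Random(0)
samples = [truncated_gaussian(0.5, 2.0, rng) for _ in range(200)]
print(min(samples), max(samples))  # all samples lie inside [0.5, 2.0]
```

In a Gibbs sweep for a truncated Gaussian random function, a draw of this kind is made coordinate by coordinate, conditionally on the current values at the other sites.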

    2. Page 337
      Abstract

In this paper, we call a discrete-time, finite-range dynamical system a finite dynamical system (FDS). An FDS is called a finite lattice dynamical system (FLDS) when its transition function is defined on a finite lattice. Hence, the transition function of an FLDS is a lattice operator and can be represented by a union of sup-generating operators, characterized by the operator basis. The identification of an FLDS consists in determining the transition function basis from samples of the system dynamics. In this paper, we introduce the notion of a dynamical envelope constraint (i.e., a lower and upper limit for the dynamics of the system to be identified) and show how to apply it to improve the precision of the identification.

    3. Page 347
      Abstract

Detection followed by morphological and non-linear processing is a typical operator sequence used in machine vision systems. Performance characterization of the sequence involves the derivation of the output image statistics as a function of the input image distribution and the operator tuning parameters. Such a characterization is critical to the task of automating the choice of tuning parameters in various applications. In this paper, we show how one can propagate the error in the detection stage, specified in the form of a probability of mis-detection and a probability of false alarm, through morphological operators. The detector output is viewed as a binary random series that is transformed to another binary random series by the morphological operation. An analytical derivation of the relationship between the input and output binary series is rather difficult. The essence of the paper is to show that the segment and gap length statistics of the output binary series can be numerically computed for morphological filters by using embeddable Markov chains. Theoretical results are verified through simulations. Extensions of the theoretical analysis to handle 2-dimensional filters are also considered. It is shown that the theory can be modified to characterize 2D morphological filter outputs when the 2D structuring element is decomposable into 1D elements.

    4. Page 359
      Abstract

This paper discusses the inverse problem of the watershed: given a partition obtained by the watershed from markers, find the minimal set of markers that reproduces the same watershed partition. We present a solution based on the minimum spanning tree and introduce the concept of a marker receptive region. We also review the watershed transform based on the minimum spanning forest problem. Two applications of this problem are presented: aiding assisted still-image watershed segmentation, and evaluating pre-processing filters.

    1. Page 369
      Abstract

Non-separable vector levelings have the nice property of being invariant under rotation of the coordinate axes. However, the calculation that derives directly from the definition is rather costly. In this paper we propose a simple algorithm for the calculation of such levelings.

    2. Page 379
      Abstract
      Corresponding author. E-mail: ccpgrrom@si.ehu.es

Associative Morphological Memories are a recently proposed neural-network architecture based on a shift of the underlying algebraic framework. They possess some robustness to specific noise models (erosive and dilative noise). By combining Associative Morphological Memories with erosion/dilation scale-spaces, we achieve increased robustness against noise. Here we report ongoing work on their application to the tasks of face localization in grayscale images and visual self-localization of a mobile robot.
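
A sketch of the minimax recall at the core of such memories, following the standard autoassociative W_XX formulation (an assumption here; the authors' scale-space extension is not shown). Stored, uncorrupted patterns are always recalled perfectly:

```python
def store(patterns):
    """W_ij = min over patterns of (x_i - x_j): a min-plus outer product
    replacing the usual sum-of-products learning rule."""
    n = len(patterns[0])
    return [[min(p[i] - p[j] for p in patterns) for j in range(n)]
            for i in range(n)]

def recall(W, x):
    """Max-plus product: y_i = max_j (W_ij + x_j)."""
    n = len(x)
    return [max(W[i][j] + x[j] for j in range(n)) for i in range(n)]

X = [[1, 4, 2], [3, 0, 5]]
W = store(X)
print([recall(W, p) == p for p in X])  # [True, True]: perfect recall
```

The max-plus/min-plus algebra replaces multiply-accumulate with add-compare, which is what gives these memories their one-shot storage and their characteristic (asymmetric) robustness to erosive versus dilative noise.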

    3. Page 389
      Abstract

We present a new algorithm to compute the watershed transform on a massively parallel machine. Logic minimization, together with a novel approach to the notion of dynamics, allows the implementation of the watershed algorithm on a cellular automata grid, such as the programmable retina. We then investigate the gains, in time and energy consumption, obtained by computing some parts of the algorithm asynchronously.

    1. Page 401
      Abstract

Connected operators filter images by selectively altering the intensity of connected sets of pixels, without introducing new image contours. While connected operators are defined, in algebraic terms, as operations in lattices of functions and have generally been formulated on a deterministic basis, they also correspond to hierarchical agglomerative clustering algorithms, which are known to have well-defined probabilistic formulations. In this paper, we argue that a probabilistic approach both generalizes existing connected operators and allows for new designs. In particular, we introduce an operator based on the development of models of visual similarity among image regions, and sequential Maximum a Posteriori (MAP) binary classification. Experimental results show that such an operator is useful for content-based simplification of image collections and videos.

    2. Page 413
      Abstract

In this paper, a class of extended lower and upper levelings is investigated. Conditions are imposed on the criteria for building extended lower and upper levelings in order to generate openings and closings with reconstruction criteria. The idea of building these new openings and closings comes from the notions of morphological filters by reconstruction and levelings. The main goal in studying these transformations is to eliminate some inconveniences of the morphological opening (closing) and the opening (closing) by reconstruction, and to show their interest for image segmentation. In fact, while the morphological opening (closing) modifies the remaining structures, the opening (closing) by reconstruction sometimes reconstructs undesirable regions.

    3. Page 425
      Abstract

This paper deals with connected operators based on reconstruction processes. New operators are proposed to perform size and motion simplification. In the case of size simplification, the proposed approach better preserves the contrast of maxima or minima that have not been totally simplified by the filtering process. In the case of motion simplification, two applications are illustrated: motion detection for maxima/minima, and sequence simplification to improve the coding efficiency of standard encoders.

  1. Page 443