Keywords: Image segmentation; background subtraction; foreground detection; thresholding; computational efficiency; classification trees; classification accuracy
Many image segmentation algorithms have been proposed to partition an image into foreground regions of interest and background regions to be ignored. Since these algorithms use pixel intensities to partition the image, it is good practice to choose a background color as different as possible from the foreground color. For a single digitizing operation the user can choose the background color manually to obtain a good segmentation result, but for several digitizing operations it is useful to automate the whole process by removing any user decision about the background color. Furthermore, modern instruments capture high-resolution images with a huge number of pixels, which poses speed problems for image segmentation algorithms based on local thresholding. In this work, an approach is introduced that adapts background subtraction (foreground detection), a method widely used for detecting moving objects in video, to the image segmentation framework. The approach combines local and global thresholding techniques to exploit the computational efficiency of the former and the accuracy of the latter. It yields good segmentation results, automates the process when the foreground color of the images is not constant, and speeds up segmentation significantly. An application to real data concerning botanical seeds is presented in order to compare, from a statistical perspective, the results of the proposed approach with those of standard image segmentation methods.
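The combination of local and global thresholding described in the abstract can be illustrated with a minimal sketch. This is one plausible reading of the idea, not the authors' exact algorithm: a global Otsu threshold is applied in flat (low-contrast) windows, while a local mean threshold, computed in O(1) per pixel with an integral image, is applied in textured windows near object boundaries. The function names, the window size, and the flatness cutoff `s_min` are assumptions made for illustration.

```python
import numpy as np

def otsu_threshold(img):
    """Global Otsu threshold: the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w = np.cumsum(p)                    # cumulative class probability
    mu = np.cumsum(p * np.arange(256))  # cumulative mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1.0 - w))
    return int(np.nanargmax(sigma_b))

def window_sums(img, window):
    """Sum over each window x window neighborhood via an integral image."""
    pad = window // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    return (ii[window:window + h, window:window + w] - ii[:h, window:window + w]
            - ii[window:window + h, :w] + ii[:h, :w])

def hybrid_segment(img, window=15, s_min=5.0):
    """Local mean threshold in high-contrast windows, global Otsu elsewhere."""
    n = float(window * window)
    mean = window_sums(img, window) / n
    var = np.clip(window_sums(img.astype(np.float64) ** 2, window) / n
                  - mean ** 2, 0.0, None)
    std = np.sqrt(var)
    t_global = otsu_threshold(img)
    # Flat windows: trust the global threshold; textured windows: local mean.
    return np.where(std < s_min, img > t_global, img > mean)
```

On a synthetic image with a bright object on a dark background, `hybrid_segment` recovers the object: flat object interiors (where a purely local threshold would fail) fall back to the global decision, while the local mean sharpens the boundary.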