
Article

Title: Neuromorphic features of probabilistic neural networks (English)
Author: Grim, Jiří
Language: English
Journal: Kybernetika
ISSN: 0023-5954
Volume: 43
Issue: 5
Year: 2007
Pages: 697-712
Summary lang: English
Category: math
Summary: We summarize the main results on probabilistic neural networks recently published in a series of papers. Within the framework of statistical pattern recognition, we approximate the class-conditional distributions by finite mixtures of product components. The probabilistic neurons correspond to mixture components and can be interpreted in neurophysiological terms. In this way we can identify a possible theoretical background of the functional properties of neurons. For example, the general formula for synaptic weights provides a statistical justification of the well-known Hebbian principle of learning. Similarly, the mean effect of lateral inhibition can be expressed by means of a formula proposed by Perez as a measure of the dependence tightness of the variables involved. (English)
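The construction described in the summary, class-conditional distributions approximated by finite mixtures of product components with parameters estimated by the EM algorithm, can be sketched in code. The following is a hedged illustration only, not the paper's implementation: it assumes binary data with Bernoulli product components (the simplest product-component case), and the function name and toy data are hypothetical.

```python
import numpy as np

def em_bernoulli_mixture(X, n_components, n_iter=50, seed=0):
    """Fit P(x) = sum_m w_m * prod_n theta_mn^x_n (1 - theta_mn)^(1 - x_n) by EM.

    Illustrative sketch: Bernoulli product components on binary data X (N x D).
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    w = np.full(n_components, 1.0 / n_components)            # component weights
    theta = rng.uniform(0.25, 0.75, size=(n_components, D))  # product-component parameters
    for _ in range(n_iter):
        # E-step: posterior responsibilities q(m | x) for each data point
        log_p = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                 + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)            # numeric stabilization
        q = np.exp(log_p)
        q /= q.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and per-dimension Bernoulli parameters
        Nm = q.sum(axis=0)
        w = Nm / N
        theta = np.clip((q.T @ X) / Nm[:, None], 1e-6, 1 - 1e-6)
    return w, theta

# Toy usage: two well-separated binary patterns
X = np.array([[1, 1, 0, 0]] * 20 + [[0, 0, 1, 1]] * 20, dtype=float)
w, theta = em_bernoulli_mixture(X, n_components=2)
```

With well-separated patterns the two components typically converge toward the two cluster prototypes; in the neuromorphic reading of the summary, each mixture component plays the role of a probabilistic neuron and its parameters `theta` correspond to the learned weights.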
Keyword: probabilistic neural networks
Keyword: distribution mixtures
Keyword: sequential EM algorithm
Keyword: pattern recognition
MSC: 62H30
MSC: 62M45
MSC: 62P10
MSC: 68T10
MSC: 92B20
idZBL: Zbl 1134.62067
idMR: MR2376332
Date available: 2009-09-24T20:28:22Z
Last updated: 2012-06-06
Stable URL: http://hdl.handle.net/10338.dmlcz/135807
Reference: [1] Bialasiewicz J.: Statistical data reduction via construction of sample space partitions. Kybernetika 6 (1970), 6, 371–379 Zbl 0218.94005, MR 0283910
Reference: [2] Dempster A. P., Laird N. M., Rubin D. B.: Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statist. Soc. B 39 (1977), 1–38 Zbl 0364.62022, MR 0501537
Reference: [3] Grim J.: On numerical evaluation of maximum-likelihood estimates for finite mixtures of distributions. Kybernetika 18 (1982), 3, 173–190 Zbl 0489.62028, MR 0680154
Reference: [4] Grim J.: Design and optimization of multilevel homogeneous structures for multivariate pattern recognition. In: Fourth FORMATOR Symposium 1982, Academia, Prague 1982, pp. 233–240 MR 0726960
Reference: [5] Grim J.: Multivariate statistical pattern recognition with non-reduced dimensionality. Kybernetika 22 (1986), 6, 142–157
Reference: [6] Grim J.: Maximum-likelihood design of layered neural networks. In: Proc. Internat. Conference on Pattern Recognition. IEEE Computer Society Press, Los Alamitos 1996, pp. 85–89
Reference: [7] Grim J.: Design of multilayer neural networks by information preserving transforms. In: Third European Congress on Systems Science (E. Pessa, M. P. Penna, and A. Montesanto, eds.). Edizioni Kappa, Roma 1996, pp. 977–982
Reference: [8] Grim J.: Information approach to structural optimization of probabilistic neural networks. In: Fourth European Congress on Systems Science (L. Ferrer and A. Caselles, eds.). SESGE, Valencia 1999, pp. 527–539
Reference: [9] Grim J.: Discretization of probabilistic neural networks with bounded information loss. In: Computer-Intensive Methods in Control and Data Processing (Preprints of the 3rd European IEEE Workshop CMP'98, Prague 1998, J. Rojicek et al., eds.), ÚTIA AV ČR, Prague 1998, pp. 205–210
Reference: [10] Grim J.: A sequential modification of EM algorithm. In: Proc. Classification in the Information Age (W. Gaul and H. Locarek-Junge, eds., Studies in Classification, Data Analysis, and Knowledge Organization), Springer, Berlin 1999, pp. 163–170
Reference: [11] Grim J.: Self-organizing maps and probabilistic neural networks. Neural Network World 10 (2000), 3, 407–415
Reference: [12] Grim J.: Probabilistic Neural Networks (in Czech). In: Umělá inteligence IV (V. Mařík, O. Štěpánková, and J. Lažanský, eds.), Academia, Praha 2003, pp. 276–312
Reference: [13] Grim J., Just P., Pudil P.: Strictly modular probabilistic neural networks for pattern recognition. Neural Network World 13 (2003), 6, 599–615
Reference: [14] Grim J., Kittler J., Pudil P., Somol P.: Combining multiple classifiers in probabilistic neural networks. In: Multiple Classifier Systems (Lecture Notes in Computer Science 1857, J. Kittler and F. Roli, eds.). Springer, Berlin 2000, pp. 157–166
Reference: [15] Grim J., Kittler J., Pudil P., Somol P.: Information analysis of multiple classifier fusion. In: Multiple Classifier Systems 2001 (Lecture Notes in Computer Science 2096, J. Kittler and F. Roli, eds.). Springer, Berlin – New York 2001, pp. 168–177 Zbl 0987.68898, MR 2043268
Reference: [16] Grim J., Kittler J., Pudil P., Somol P.: Multiple classifier fusion in probabilistic neural networks. Pattern Analysis & Applications 5 (2002), 7, 221–233 Zbl 1021.68079, MR 1930448
Reference: [17] Grim J., Pudil P., Somol P.: Recognition of handwritten numerals by structural probabilistic neural networks. In: Proc. Second ICSC Symposium on Neural Computation (H. Bothe and R. Rojas, eds.). ICSC, Wetaskiwin 2000, pp. 528–534
Reference: [18] Grim J., Pudil P., Somol P.: Boosting in probabilistic neural networks. In: Proc. 16th International Conference on Pattern Recognition (R. Kasturi, D. Laurendeau and C. Suen, eds.). IEEE Computer Society, Los Alamitos 2002, pp. 136–139
Reference: [19] Grim J., Somol P., Pudil P., Just P.: Probabilistic neural network playing a simple game. In: Artificial Neural Networks in Pattern Recognition (S. Marinai and M. Gori, eds.). University of Florence, Florence 2003, pp. 132–138
Reference: [20] Grim J., Somol P., Pudil P.: Probabilistic neural network playing and learning Tic-Tac-Toe. Pattern Recognition Letters, Special Issue 26 (2005), 12, 1866–1873
Reference: [21] Haykin S.: Neural Networks: A Comprehensive Foundation. Morgan Kaufman, San Mateo 1993 Zbl 0934.68076
Reference: [22] McLachlan G. J., Peel D.: Finite Mixture Models. Wiley, New York – Toronto 2000 Zbl 0963.62061, MR 1789474
Reference: [23] Perez A.: Information, $\varepsilon$-sufficiency and data reduction problems. Kybernetika 1 (1965), 4, 297–323 MR 0205410
Reference: [24] Perez A.: $\varepsilon$-admissible simplification of the dependence structure of a set of random variables. Kybernetika 13 (1977), 6, 439–449 MR 0472224
Reference: [25] Schlesinger M. I.: Relation between learning and self-learning in pattern recognition (in Russian). Kibernetika (1968), 6, 81–88
Reference: [26] Specht D. F.: Probabilistic neural networks for classification, mapping or associative memory. In: Proc. IEEE Internat. Conference on Neural Networks 1988, Vol. I, pp. 525–532
Reference: [27] Streit R. L., Luginbuhl T. E.: Maximum likelihood training of probabilistic neural networks. IEEE Trans. Neural Networks 5 (1994), 764–783
Reference: [28] Vajda I., Grim J.: About the maximum information and maximum likelihood principles in neural networks. Kybernetika 34 (1998), 4, 485–494 MR 0359208
Reference: [29] Watanabe S., Fukumizu K.: Probabilistic design of layered neural networks based on their unified framework. IEEE Trans. Neural Networks 6 (1995), 3, 691–702
Reference: [30] Xu L., Jordan M. I.: On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation 8 (1996), 129–151
