Abstract
This paper considers an alternative activation function for use with MLP networks. Performance on parity problems is examined, and it is found that only n − 1 hidden units are needed to solve the n-bit problem. Insight is also gained into the families of network parameters generated, and use as the kernel of a support vector machine for particular problems is anticipated.
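The specific activation function studied in the paper is not reproduced in this excerpt, so the sketch below is an illustration rather than the authors' construction: it shows how a non-monotone (here, cosine) activation can collapse the hidden-unit count for parity, since cos(πs) = (−1)^s for an integer bit-sum s, letting a single hidden unit compute n-bit parity exactly.

```python
import math
from itertools import product

def parity(bits):
    """Target function: 1 if an odd number of bits are set, else 0."""
    return sum(bits) % 2

def cosine_unit_parity(bits):
    """One hidden unit with cosine activation (illustrative, not the
    paper's activation): all input weights 1, bias 0, so the
    pre-activation is the bit-sum s, and cos(pi * s) = (-1)**s."""
    s = sum(bits)
    h = math.cos(math.pi * s)
    return 0 if h > 0 else 1  # threshold output unit

# Exhaustive check over all 2**n inputs for several problem sizes.
for n in range(1, 9):
    assert all(cosine_unit_parity(x) == parity(x)
               for x in product((0, 1), repeat=n))
```

The point of the sketch is only that oscillatory activations break the linear-separability barrier that forces sigmoid networks to use many hidden units on parity; the paper's own activation and its n − 1 unit construction are not given here.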
Copyright information
© 2001 Springer-Verlag Wien
Cite this paper
Steele, N.C., Reeves, C.R., Gaura, E.I. (2001). Activation Functions. In: Kůrková, V., Neruda, R., Kárný, M., Steele, N.C. (eds) Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6230-9_5
Print ISBN: 978-3-211-83651-4
Online ISBN: 978-3-7091-6230-9