
Improving Neural Network Classifier Using Gradient-Based Floating Centroid Method

  • Conference paper
  • First Online:

Neural Information Processing (ICONIP 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1143)

Included in the following conference series: International Conference on Neural Information Processing (ICONIP)

Abstract

The floating centroid method (FCM) offers an efficient way to solve the fixed-centroid problem for neural network classifiers. However, because FCM relies on evolutionary computation as its optimization method, its high computational complexity and inefficiency prevent it from achieving satisfactory performance across different neural network structures. Traditional gradient-based methods, by contrast, have been extensively adopted to optimize neural network classifiers. In this study, a gradient-based floating centroid (GDFC) method is introduced to address the fixed-centroid problem for neural network classifiers optimized by gradient-based methods. Furthermore, a new loss function for optimizing GDFC is introduced. The experimental results show that GDFC achieves better classification performance than the comparison methods on benchmark datasets.
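The abstract describes GDFC only at a high level: class centroids are allowed to float in a partition space and are optimized, together with the network, by gradient descent under a new loss function. As a rough illustration of that general idea only, the minimal PyTorch sketch below treats the centroids as trainable parameters and uses a simple pull-toward-own-centroid, push-from-nearest-other-centroid loss. The architecture, the loss, and all names here are assumptions made for illustration; they are not the loss function or implementation defined in the paper.

```python
# Hypothetical sketch, NOT the authors' code: it only illustrates the general idea of
# "floating" class centroids trained jointly with a network by gradient descent.
# The architecture, the margin-style loss, and all names are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FloatingCentroidNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, partition_dim, num_classes):
        super().__init__()
        # Network that embeds each sample into a low-dimensional partition space.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, partition_dim),
        )
        # One trainable centroid per class; gradient updates let the centroids "float".
        self.centroids = nn.Parameter(torch.randn(num_classes, partition_dim))

    def forward(self, x):
        return self.encoder(x)

def centroid_loss(points, centroids, labels, margin=1.0):
    # Pull each sample toward its own class centroid; push it away from the nearest other one.
    dists = torch.cdist(points, centroids)                        # (batch, num_classes)
    mask = F.one_hot(labels, num_classes=centroids.size(0)).bool()
    pos = dists[mask]                                             # distance to own centroid
    nearest_other = dists.masked_fill(mask, float("inf")).min(dim=1).values
    return (pos + torch.relu(margin - nearest_other)).mean()

def predict(model, x):
    # Classify by the nearest centroid in the partition space.
    return torch.cdist(model(x), model.centroids).argmin(dim=1)

# Example: one gradient step on random data (4 classes, 20 input features).
model = FloatingCentroidNet(in_dim=20, hidden_dim=64, partition_dim=8, num_classes=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 20), torch.randint(0, 4, (32,))
loss = centroid_loss(model(x), model.centroids, y)
opt.zero_grad(); loss.backward(); opt.step()
```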

M. Islam and S. Liu contributed equally to this article.



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants No. 61872419, 61573166, 61572230, 61873324, 81671785, and 61672262; the Shandong Provincial Natural Science Foundation under Grants No. ZR2019MF040 and ZR2018LF005; the Shandong Provincial Key R&D Program under Grants No. 2019GGX101041, 2018GGX101048, 2016ZDJS01A12, 2016GGX101001, and 2017CXZC1206; and the Taishan Scholar Project of Shandong Province, China, under Grant No. tsqn201812077.

Author information


Corresponding author

Correspondence to Lin Wang.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Islam, M., Liu, S., Zhang, X., Wang, L. (2019). Improving Neural Network Classifier Using Gradient-Based Floating Centroid Method. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1143. Springer, Cham. https://doi.org/10.1007/978-3-030-36802-9_45


  • DOI: https://doi.org/10.1007/978-3-030-36802-9_45

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36801-2

  • Online ISBN: 978-3-030-36802-9

  • eBook Packages: Computer Science; Computer Science (R0)
