
The New Approach for Creating the Knowledge Base Using WikiPedia

  • Conference paper
  • First Online:
Second International Conference on Computer Networks and Communication Technologies (ICCNCT 2019)

Abstract

Wikipedia is recognized as one of the largest knowledge repositories on the Web. The term knowledge base originated with expert systems, a branch of Artificial Intelligence, and a knowledge base can be created for any entity. Existing systems such as YAGO and MediaWiki attempt to convert Wikipedia into a structured database that offers a vast knowledge base across domains, but retrieving the specific information one wants from such cross-domain resources is difficult. The solution is a systematic, automated approach that builds a knowledge base from Wikipedia for an entity of interest. The proposed system builds a knowledge base with location as its entity. The system is fed with seed data; using this seed data, it traverses the Wikipedia link graph and builds the knowledge base by measuring the similarity between the seed data and each newly visited Wikipedia page. Expert AI systems rely on such a gold-standard knowledge base to make decisions.
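
The abstract does not give implementation details, but the pipeline it describes (seed pages, Wikipedia link-graph traversal, similarity filtering of visited pages) can be sketched roughly as follows. This is a minimal illustration under assumptions not stated by the authors: the `wikipedia` Python package for page access, TF-IDF with cosine similarity from scikit-learn as the similarity measure, and the hypothetical function `build_location_kb` with an arbitrary threshold and page limit.

# Hypothetical sketch of the seeded Wikipedia traversal described in the abstract.
# Assumptions (not from the paper): the `wikipedia` package for page access and
# scikit-learn TF-IDF + cosine similarity as the similarity measure.
from collections import deque

import wikipedia
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def build_location_kb(seed_titles, threshold=0.3, max_pages=100):
    """Traverse Wikipedia from seed pages, keeping pages similar to the seeds."""
    seed_texts = [wikipedia.page(t, auto_suggest=False).content for t in seed_titles]
    vectorizer = TfidfVectorizer(stop_words="english").fit(seed_texts)
    seed_vectors = vectorizer.transform(seed_texts)

    knowledge_base = {}
    visited = set(seed_titles)
    queue = deque(seed_titles)

    while queue and len(knowledge_base) < max_pages:
        title = queue.popleft()
        try:
            page = wikipedia.page(title, auto_suggest=False)
        except wikipedia.exceptions.WikipediaException:
            continue  # skip disambiguation pages and missing articles

        # Similarity between the candidate page and the seed set.
        score = cosine_similarity(
            vectorizer.transform([page.content]), seed_vectors
        ).max()

        if score >= threshold:
            knowledge_base[title] = page.summary
            # Follow outgoing links only from pages accepted into the knowledge base.
            for link in page.links:
                if link not in visited:
                    visited.add(link)
                    queue.append(link)

    return knowledge_base

Seeding such a sketch with a handful of location articles (for example, a few city pages) would yield a dictionary of page summaries judged similar to the seeds; the authors' actual system may use a different similarity measure, traversal policy, or knowledge-base representation.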



Author information


Corresponding author

Correspondence to Prasad E. Ganesh.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Ganesh, P.E., Manjunath, H.R., Deepashree, V., Kavana, M.G., Raviraja (2020). The New Approach for Creating the Knowledge Base Using WikiPedia. In: Smys, S., Senjyu, T., Lafata, P. (eds) Second International Conference on Computer Networks and Communication Technologies. ICCNCT 2019. Lecture Notes on Data Engineering and Communications Technologies, vol 44. Springer, Cham. https://doi.org/10.1007/978-3-030-37051-0_19

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-37051-0_19

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37050-3

  • Online ISBN: 978-3-030-37051-0

  • eBook Packages: Engineering, Engineering (R0)
