
TKGFrame: A Two-Phase Framework for Temporal-Aware Knowledge Graph Completion

  • Conference paper
Web and Big Data (APWeb-WAIM 2020)

Abstract

In this paper, we focus on temporal-aware knowledge graph (TKG) completion, which aims to automatically predict missing links in a TKG by making inferences from the existing temporal facts and the temporal information among them. Existing methods for this task mainly focus on modeling the temporal ordering of relations contained in the temporal facts in order to learn a low-dimensional vector space for the TKG. However, these models either ignore the evolving strength of temporal ordering relations within the structure of a relational chain, or pay little attention to revising the candidate predictions produced by the TKG embeddings. To address these two limitations, we propose a novel two-phase framework, TKGFrame, to boost the final performance on the task. Specifically, TKGFrame employs two major models. The first is a relation evolving enhanced model that strengthens the evolving-strength representations of pairwise relations belonging to the same relational chain, resulting in more accurate TKG embeddings. The second is a refinement model that revises the candidate predictions from the embeddings and further improves the prediction of missing temporal facts by solving a constrained optimization problem. Experiments on three popular datasets for entity prediction and relation prediction demonstrate that TKGFrame achieves more accurate predictions than several state-of-the-art baselines.
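The refinement phase described above can be illustrated with a toy sketch. The paper's actual constraint formulation is not reproduced here; the candidate facts, scores, and the single temporal-ordering constraint below are hypothetical, purely to convey the idea that refinement can overrule the embedding scores when a candidate violates temporal consistency.

```python
# Toy illustration of the refinement idea: among embedding-scored
# candidates for a missing link, keep only those that satisfy a
# temporal consistency constraint, then take the best-scoring one.
# All facts, scores, and the constraint are hypothetical; the paper
# formulates refinement as a constrained optimization problem.

def refine(candidates, known_year, min_gap=0):
    """candidates: list of (entity, score, year) triples produced by
    the embeddings. A candidate is feasible only if its timestamp is
    not earlier than known_year (a stand-in temporal constraint)."""
    feasible = [c for c in candidates if c[2] >= known_year + min_gap]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c[1])

# Embedding scores alone would pick "Paris" (score 0.9), but its
# timestamp violates the constraint, so refinement picks "Berlin".
cands = [("Paris", 0.9, 1980), ("Berlin", 0.7, 2001), ("Rome", 0.5, 1999)]
print(refine(cands, known_year=2000))  # -> ('Berlin', 0.7, 2001)
```

In the paper the selection is solved as an integer program (via PuLP, per Note 4) over all candidates and constraints jointly, rather than by the greedy filter used here.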


Notes

  1. KG completion, also known as link prediction in KG, aims to automatically predict missing links between entities based on the known facts in the KG.

  2. The experimental details and source code of the model are publicly available at https://github.com/zjs123/TKGComplt.

  3. The relational chain is constructed by connecting temporal relations that share the same head entity, ordered by their timestamps.

  4. https://pypi.python.org/pypi/PuLP.

  5. The code for TransE and TransH is from https://github.com/thunlp/OpenKE.

  6. The code for TTransE is from https://github.com/INK-USC/RE-Net/tree/master/baselines.

  7. The code for TA-TransE is from https://github.com/nle-ml/mmkb.

  8. The code for HyTE is from https://github.com/malllabiisc/HyTE.

  9. We train TransE and TransH on all datasets with embedding dimension \(d = 100\), margin \(\gamma = 1.0\), learning rate \(l = 10^{-3}\), and the \(l_{1}\)-norm. For TAE-TransE and TAE-TransH, the configuration is \(d = 100\), \(\gamma_{1} = \gamma_{2} = 4\), \(l = 10^{-4}\), regularization hyperparameter \(t = 10^{-3}\), with the \(l_{1}\)-norm on the YAGO11k and Wikidata12k datasets, and \(d = 100\), \(\gamma_{1} = \gamma_{2} = 2\), \(l = 10^{-5}\), \(t = 10^{-3}\), with the \(l_{1}\)-norm on Wikidata11k. We train TA-TransE and TTransE with the same parameter settings as introduced in [11]. For TA-TransE, the configuration is \(d = 100\), \(\gamma = 1\), batch size \(bs = 512\), \(l = 10^{-4}\), and the \(l_{1}\)-norm on all datasets. For HyTE, we use the parameter settings from the original paper: \(d = 128\), \(\gamma = 10\), \(l = 10^{-5}\), negative sampling ratio \(n = 5\), and the \(l_{1}\)-norm on all datasets.
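The relational-chain construction of Note 3 can be sketched as follows. The fact format and function name are assumptions for illustration: temporal facts sharing a head entity are grouped, and each group's relations are ordered by timestamp to form that entity's chain.

```python
from collections import defaultdict

# Sketch of Note 3: build relational chains by grouping temporal facts
# that share a head entity and ordering their relations by timestamp.
# The (head, relation, tail, year) tuple format is a hypothetical
# simplification of the datasets' temporal facts.

def build_chains(facts):
    by_head = defaultdict(list)
    for head, rel, tail, year in facts:
        by_head[head].append((year, rel))
    # Each chain is the head entity's relations sorted by timestamp.
    return {h: [r for _, r in sorted(rs)] for h, rs in by_head.items()}

facts = [
    ("einstein", "diedIn", "princeton", 1955),
    ("einstein", "wasBornIn", "ulm", 1879),
    ("einstein", "graduatedFrom", "eth_zurich", 1900),
]
print(build_chains(facts))
# {'einstein': ['wasBornIn', 'graduatedFrom', 'diedIn']}
```

Pairs of adjacent relations in such a chain (e.g. wasBornIn before graduatedFrom) are what the relation evolving enhanced model scores for evolving strength.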

References

  1. Barbosa, D., Wang, H., Yu, C.: Shallow information extraction for the knowledge web. In: ICDE, pp. 1264–1267 (2013)

  2. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., Taylor, J.: Freebase: a collaboratively created graph database for structuring human knowledge. In: SIGMOD, pp. 1247–1250 (2008)

  3. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: NIPS, pp. 2787–2795 (2013)

  4. Clarke, J., Lapata, M.: Global inference for sentence compression: an integer linear programming approach. J. Artif. Intell. Res. 31, 399–429 (2008)

  5. Dasgupta, S.S., Ray, S.N., Talukdar, P.: HyTE: hyperplane-based temporally aware knowledge graph embedding. In: EMNLP, pp. 2001–2011 (2018)

  6. Dong, L., Wei, F., Zhou, M., Xu, K.: Question answering over freebase with multi-column convolutional neural networks. In: ACL-IJCNLP (vol. 1: Long Papers), pp. 260–269 (2015)

  7. Erxleben, F., Günther, M., Krötzsch, M., Mendez, J., Vrandečić, D.: Introducing wikidata to the linked data web. In: Mika, P., et al. (eds.) ISWC 2014. LNCS, vol. 8796, pp. 50–65. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11964-9_4

  8. García-Durán, A., Dumančić, S., Niepert, M.: Learning sequence encoders for temporal knowledge graph completion (2018). https://arxiv.org/abs/1809.03202

  9. Jiang, T., et al.: Towards time-aware knowledge graph completion. In: COLING, pp. 1715–1724 (2016)

  10. Jiang, T., et al.: Encoding temporal information for time-aware link prediction. In: EMNLP, pp. 2350–2354 (2016)

  11. Jin, W., et al.: Recurrent event network: global structure inference over temporal knowledge graph (2019). https://arxiv.org/abs/1904.05530

  12. Leblay, J., Chekol, M.W.: Deriving validity time in knowledge graph. In: WWW, pp. 1771–1776 (2018)

  13. Lehmann, J., et al.: DBpedia-a large-scale, multilingual knowledge base extracted from Wikipedia. Semant. Web 6(2), 167–195 (2015)

  14. Lin, Y., Liu, Z., Sun, M., Liu, Y., Zhu, X.: Learning entity and relation embeddings for knowledge graph completion. In: AAAI, pp. 2181–2187 (2015)

  15. Mahdisoltani, F., Biega, J., Suchanek, F.M.: YAGO3: a knowledge base from multilingual Wikipedias. In: CIDR (2015)

  16. Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multi-relational data. In: ICML, vol. 11, pp. 809–816 (2011)

  17. Suchanek, F.M., Kasneci, G., Weikum, G.: Yago: a core of semantic knowledge. In: WWW, pp. 697–706 (2007)

  18. Sun, Z., Hu, W., Zhang, Q., Qu, Y.: Bootstrapping entity alignment with knowledge graph embedding. In: IJCAI, pp. 4396–4402 (2018)

  19. Trivedi, R., Dai, H., Wang, Y., Song, L.: Know-evolve: deep temporal reasoning for dynamic knowledge graphs. In: ICML, vol. 70, pp. 3462–3471 (2017)

  20. Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph embedding by translating on hyperplanes. In: AAAI, pp. 1112–1119 (2014)

  21. Xiong, C., Callan, J.: Query expansion with freebase. In: ICTIR, pp. 111–120 (2015)

  22. Zhou, X., Zhu, Q., Liu, P., Guo, L.: Learning knowledge embeddings by combining limit-based scoring loss. In: CIKM, pp. 1009–1018 (2017)


Acknowledgments

This work was supported by Major Scientific and Technological Special Project of Guizhou Province (No. 20183002), Sichuan Science and Technology Program (No. 2020YFS0057, No. 2020YJ0038 and No. 2019YFG0535), Fundamental Research Funds for the Central Universities (No. ZYGX2019Z015) and Dongguan Songshan Lake Introduction Program of Leading Innovative and Entrepreneurial Talents. Yongpan Sheng’s research was supported by the National Key Research and Development Project (No. 2018YFB2101200).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yongpan Sheng.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, J., Sheng, Y., Wang, Z., Shao, J. (2020). TKGFrame: A Two-Phase Framework for Temporal-Aware Knowledge Graph Completion. In: Wang, X., Zhang, R., Lee, YK., Sun, L., Moon, YS. (eds) Web and Big Data. APWeb-WAIM 2020. Lecture Notes in Computer Science, vol 12317. Springer, Cham. https://doi.org/10.1007/978-3-030-60259-8_16

  • DOI: https://doi.org/10.1007/978-3-030-60259-8_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-60258-1

  • Online ISBN: 978-3-030-60259-8

  • eBook Packages: Computer Science, Computer Science (R0)
