
Learning Parameters in Directed Evidential Networks with Conditional Belief Functions

  • Conference paper
Belief Functions: Theory and Applications (BELIEF 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8764)

Abstract

Directed evidential networks with conditional belief functions are among the most commonly used graphical models for analyzing complex systems and handling different types of uncertainty. A crucial step in benefiting from the reasoning process in these models is quantifying them. In this paper, we address the issue of estimating the parameters of evidential networks from evidential databases by applying maximum likelihood estimation generalized to the evidence theory framework.
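The estimation idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm, only a hypothetical counting-style analogue: assuming an evidential database in which each record carries a mass function over focal sets of a variable's frame of discernment, averaging the observed masses yields a mass-table estimate, mirroring how relative frequencies estimate conditional probability tables in Bayesian networks. The data and function names below are illustrative assumptions.

```python
# Hypothetical evidential database for a variable X with frame {'a', 'b'}:
# each record is a mass function, i.e. a mapping from focal sets
# (frozensets of values) to masses summing to 1.
records = [
    {frozenset({'a'}): 0.7, frozenset({'a', 'b'}): 0.3},
    {frozenset({'b'}): 0.6, frozenset({'a', 'b'}): 0.4},
    {frozenset({'a'}): 1.0},
]

def estimate_mass(records):
    """Estimate a mass table for X by averaging observed masses:
    m_hat(A) = (1/N) * sum_i m_i(A).
    This is the counting analogue of maximum likelihood estimation
    for multinomial parameters, lifted from singletons to focal sets."""
    counts = {}
    for rec in records:
        for focal, mass in rec.items():
            counts[focal] = counts.get(focal, 0.0) + mass
    n = len(records)
    return {focal: total / n for focal, total in counts.items()}

m_hat = estimate_mass(records)
# Each record's masses sum to 1, so the averaged estimate does too.
assert abs(sum(m_hat.values()) - 1.0) < 1e-9
```

The full paper handles the harder case of *conditional* belief functions attached to network edges; the sketch above only shows the unconditional counting step under the stated assumptions.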




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Ben Hariz, N., Ben Yaghlane, B. (2014). Learning Parameters in Directed Evidential Networks with Conditional Belief Functions. In: Cuzzolin, F. (ed.) Belief Functions: Theory and Applications. BELIEF 2014. Lecture Notes in Computer Science, vol. 8764. Springer, Cham. https://doi.org/10.1007/978-3-319-11191-9_32

  • DOI: https://doi.org/10.1007/978-3-319-11191-9_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11190-2

  • Online ISBN: 978-3-319-11191-9

  • eBook Packages: Computer Science, Computer Science (R0)
