
Learning Abductive and Nonmonotonic Logic Programs

A chapter in Abduction and Induction, part of the book series Applied Logic Series (APLS, volume 18).

Abstract

We investigate the integration of induction and abduction in the context of logic programming. Our approach learns theories for abductive logic programming (ALP) within the framework of inductive logic programming (ILP). Both ILP and ALP are important research areas in logic programming and AI. ILP provides theoretical frameworks and practical algorithms for the inductive learning of relational descriptions in the form of logic programs (Muggleton, 1992; Lavrač and Džeroski, 1994; De Raedt, 1996). ALP, on the other hand, is usually regarded as an extension of logic programming that handles abduction, so that incomplete information can be represented and manipulated easily (Kakas et al., 1992). Learning abductive programs has also been proposed as an extension of previous work in ILP (Dimopoulos and Kakas, 1996b; Kakas and Riguzzi, 1997).1 The important question here is: “how do we learn abductive theories?”
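As an illustrative sketch only (not the chapter's algorithm), abduction over a definite-clause background theory can be phrased as a search for minimal sets of abducible hypotheses that, added to the program, entail an observation. All predicate names below are invented for the example:

```python
from itertools import chain, combinations

# Background theory as definite clauses: (head, body), body a set of atoms.
# The rules and abducibles here are hypothetical, chosen for illustration.
rules = [
    ("wet_grass", {"rained"}),
    ("wet_grass", {"sprinkler_on"}),
]
abducibles = {"rained", "sprinkler_on"}

def consequences(facts):
    """Forward chaining: close a set of facts under the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def explanations(observation):
    """Subset-minimal sets of abducibles that entail the observation."""
    found = []
    subsets = chain.from_iterable(
        combinations(sorted(abducibles), r) for r in range(len(abducibles) + 1)
    )
    for subset in subsets:  # enumerated in increasing size
        s = set(subset)
        if observation in consequences(s) and not any(e <= s for e in found):
            found.append(s)
    return found

print(explanations("wet_grass"))  # → [{'rained'}, {'sprinkler_on'}]
```

Each returned set is an abductive explanation: either hypothesis alone accounts for the observation, and the superset containing both is pruned as non-minimal.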


References

  1. While we adopted the answer set semantics for LELP, other semantics for ELPs may be applicable to our learning framework with minor modifications. For example, Lamma et al. use the well-founded semantics for learning ELPs, and their output hypotheses take a slightly different form from ours (Lamma et al., 1998).

  2. In conventional machine learning methods, a search bias and a noise-handling mechanism are usually implemented to prevent the induced hypotheses from overfitting the given examples. See (Lavrač and Džeroski, 1994, Chapter 8) for an overview of mechanisms for handling imperfect data in ILP. These conventional approaches to noise handling can also be applied to the determination and implementation of GenRules in learning positive or negative rules, e.g., (Srinivasan et al., 1992), in conjunction with our solutions. Since both positive and negative concepts are learned in our proposals, the use of parallel default rules and nondeterministic rules further reduces the number of incorrectly classified training examples.

  3. We can also consider other criteria for learning hierarchical default-cancellation rules. For example, we can even produce nondeterministic rules at lower levels of the hierarchy.

  4. The LELP2 algorithm in this chapter has been revised from the previous version in (Inoue and Kudoh, 1997). The previous algorithm produced rules deriving counter-examples by Counter at every level of the hierarchy, whereas such rules are now added only once, at the top level (Step 7 or 8), and only when parallel default rules are not learned. Thus, for Example 14.3, the resulting Rules no longer include the rule (-flies(D) :- ab2(D)), which is unnecessary. This redundancy in the previous version was pointed out in (Lamma et al., 1998).

  5. To avoid inductive leaps, some researchers propose a weak form of induction by applying the CWA to BG ∪ E through Clark’s completion, e.g., (De Raedt and Lavrač, 1993). However, as explained earlier, the CWA is not appropriate when learning ELPs.
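The notes above mention default rules cancelled by abnormality predicates (such as a rule deriving -flies(D) from ab2(D)) and the refusal to make inductive leaps when neither a positive nor a negative rule applies. A minimal, hypothetical sketch of such three-valued default evaluation, with invented names following the standard bird example, might look like:

```python
# Hypothetical illustration, not the chapter's LELP algorithm.
# Known ground facts; all names are invented for the example.
facts = {"bird(tweety)", "penguin(tweety)", "bird(polly)"}

def holds(atom):
    return atom in facts

def derive_flies(x):
    """Evaluate flies(x) under a default rule with a cancellation rule.

    Cancellation rule:  ab(X) :- penguin(X).
    Positive default:   flies(X) :- bird(X), not ab(X).
    Negative rule:      -flies(X) :- ab(X).
    """
    ab = holds(f"penguin({x})")
    if holds(f"bird({x})") and not ab:
        return True    # flies(x) by the default
    if ab:
        return False   # -flies(x) by the cancellation rule
    return None        # undefined: no inductive leap is made (cf. note 5)

print(derive_flies("polly"))   # → True
print(derive_flies("tweety"))  # → False
print(derive_flies("opus"))    # → None (nothing known about opus)
```

Returning `None` for unknown individuals mirrors the point of note 5: unlike applying the CWA through Clark's completion, the evaluation stays agnostic where neither concept has been learned.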


Copyright information

© 2000 Springer Science+Business Media Dordrecht


Cite this chapter

Inoue, K., Haneda, H. (2000). Learning Abductive and Nonmonotonic Logic Programs. In: Flach, P.A., Kakas, A.C. (eds) Abduction and Induction. Applied Logic Series, vol 18. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-0606-3_14


  • DOI: https://doi.org/10.1007/978-94-017-0606-3_14

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-5433-3

  • Online ISBN: 978-94-017-0606-3

  • eBook Packages: Springer Book Archive
