
Studies on CART’s Performance in Rule Induction and Comparisons by STRIM

In a Simulation Model for Data Generation and Verification of Induced Rules

  • Conference paper
  • First Online:
Rough Sets (IJCRS 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11103)


Abstract

The tree-based method is a conventional statistical approach that constructs a classification model as a tree structure by recursively splitting a dataset on explanatory variables so as to minimize an impurity criterion for the response variable. The resulting tree induces many if-then rules in product form. In this paper, we study a basic tree-based approach, the classification and regression trees (CART) method, using a simulation model for data generation and verification of induced rules. We compare CART with the statistical test rule induction method (STRIM) to clarify CART's performance and problems. We also apply both methods to a real-world dataset and assess their performance in light of the simulation results.
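The recursive-splitting idea described in the abstract can be sketched in a few lines. The sketch below uses scikit-learn's DecisionTreeClassifier (a CART implementation) as a stand-in for the rpart package [18] used in the paper; the decision table (six condition attributes with values 1 to 6, response driven by a single if-then rule) is a hypothetical illustration only loosely in the spirit of the paper's data-generation model, not the authors' actual simulation design.

```python
# Minimal sketch: induce if-then rules with CART from a simulated decision table.
# Assumptions: scikit-learn's DecisionTreeClassifier stands in for rpart [18];
# the table layout and rule below are illustrative, not the paper's model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Simulated decision table: 1000 samples, 6 condition attributes C1..C6
# taking values in {1, ..., 6}.
X = rng.integers(1, 7, size=(1000, 6))

# Hypothetical pre-specified rule: if C1 = 1 and C2 = 1 then D = 1, else D = 2.
y = np.where((X[:, 0] == 1) & (X[:, 1] == 1), 1, 2)

# CART greedily splits on the attribute/threshold that most reduces Gini impurity.
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
tree.fit(X, y)

# Each root-to-leaf path is an if-then rule in product form.
rules = export_text(tree, feature_names=[f"C{i + 1}" for i in range(6)])
print(rules)
```

Because the planted rule is a conjunction of two attribute conditions, a depth-2 tree recovers it exactly; comparing such induced paths against the pre-specified rules is the kind of verification the simulation model enables.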


References

  1. Matsubayashi, T., Kato, Y., Saeki, T.: A new rule induction method from a decision table using a statistical test. In: Li, T., et al. (eds.) RSKT 2012. LNCS (LNAI), vol. 7414, pp. 81–90. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-31900-6_11


  2. Kato, Y., Saeki, T., Mizuno, S.: Studies on the necessary data size for rule induction by STRIM. In: Lingras, P., Wolski, M., Cornelis, C., Mitra, S., Wasilewski, P. (eds.) RSKT 2013. LNCS (LNAI), vol. 8171, pp. 213–220. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41299-8_20


  3. Kato, Y., Saeki, T., Mizuno, S.: Considerations on rule induction procedures by STRIM and their relationship to VPRS. In: Kryszkiewicz, M., Cornelis, C., Ciucci, D., Medina-Moreno, J., Motoda, H., Raś, Z.W. (eds.) RSEISP 2014. LNCS (LNAI), vol. 8537, pp. 198–208. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08729-0_19


  4. Saeki, T., Kato, Y., Mizuno, S.: Studies of rule induction by STRIM from the decision table with contaminated attribute values from missing data and noise – in the case of critical dataset size–. World Acad. Sci. Eng. Technol. Int. J. Comput. Electr. Autom. Control Inf. Eng. 19(6), 1244–1249 (2015)


  5. Kato, Y., Saeki, T., Mizuno, S.: Proposal of a statistical test rule induction method by use of the decision table. Appl. Soft Comput. 28, 160–166 (2015)


  6. Kato, Y., Saeki, T., Mizuno, S.: Proposal for a statistical reduct method for decision tables. In: Ciucci, D., Wang, G., Mitra, S., Wu, W.-Z. (eds.) RSKT 2015. LNCS (LNAI), vol. 9436, pp. 140–152. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-25754-9_13


  7. Kitazaki, Y., Saeki, T., Kato, Y.: Performance comparison to a classification problem by the second method of quantification and STRIM. In: Flores, V., et al. (eds.) IJCRS 2016. LNCS (LNAI), vol. 9920, pp. 406–415. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-47160-0_37


  8. Fei, J., Saeki, T., Kato, Y.: Proposal for a new reduct method for decision tables and an improved STRIM. In: Tan, Y., Takagi, H., Shi, Y. (eds.) DMBD 2017. LNCS, vol. 10387, pp. 366–378. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61845-6_37


  9. Kato, Y., Itsuno, T., Saeki, T.: Proposal of dominance-based rough set approach by STRIM and its applied example. In: Polkowski, L., et al. (eds.) IJCRS 2017. LNCS (LNAI), vol. 10313, pp. 418–431. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-60837-2_35


  10. Pawlak, Z.: Rough sets. Int. J. Comput. Inf. Sci. 11(5), 341–356 (1982)


  11. Skowron, A., Rauszer, C.M.: The discernibility matrices and functions in information systems. In: Słowiński, R. (ed.) Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets Theory, vol. 11, pp. 331–362. Kluwer Academic Publishers, Dordrecht (1992). https://doi.org/10.1007/978-94-015-7975-9_21


  12. Grzymala-Busse, J.W.: LERS – a system for learning from examples based on rough sets. In: Słowiński, R. (ed.) Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets Theory, vol. 11, pp. 3–18. Kluwer Academic Publishers, Dordrecht (1992). https://doi.org/10.1007/978-94-015-7975-9_1


  13. Ziarko, W.: Variable precision rough set model. J. Comput. Syst. Sci. 46, 39–59 (1993)


  14. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Chapman & Hall, New York (1984)


  15. Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Ann. Stat. 29(5), 1189–1232 (2001)


  16. Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996)


  17. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)


  18. https://cran.r-project.org/web/packages/rpart/index.html

  19. http://rit.rakuten.co.jp/opendataj.html

  20. Zheng, Z., Wang, G., Wu, Y.: A rough set and rule tree based incremental knowledge acquisition algorithm. In: Wang, G., Liu, Q., Yao, Y., Skowron, A. (eds.) RSFDGrC 2003. LNCS (LNAI), vol. 2639, pp. 122–129. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-39205-X_16


  21. Sikder, I.U., Munakata, T.: Application of rough set and decision tree for characterization of premonitory factors of low seismic activity. Expert Syst. Appl. 36, 102–110 (2009)


  22. Buregwa-Czuma, S., Bazan, J.G., Bazan-Socha, S., Rzasa, W., Dydo, L., Skowron, A.: Resolving the conflicts between cuts in a decision tree with verifying cuts. In: Polkowski, L., et al. (eds.) IJCRS 2017. LNCS (LNAI), vol. 10314, pp. 403–422. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-60840-2_30



Acknowledgements

We sincerely thank Rakuten Inc. for providing the Rakuten Travel dataset [19].

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yuichi Kato.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Kato, Y., Kawaguchi, S., Saeki, T. (2018). Studies on CART's Performance in Rule Induction and Comparisons by STRIM. In: Nguyen, H., Ha, QT., Li, T., Przybyła-Kasperek, M. (eds) Rough Sets. IJCRS 2018. Lecture Notes in Computer Science, vol. 11103. Springer, Cham. https://doi.org/10.1007/978-3-319-99368-3_12

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-99368-3_12

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99367-6

  • Online ISBN: 978-3-319-99368-3

  • eBook Packages: Computer Science, Computer Science (R0)
