
SPAARC: A Fast Decision Tree Algorithm

  • Conference paper
  • Data Mining (AusDM 2018)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 996)

Abstract

Decision trees are a popular method of data mining and knowledge discovery, capable of extracting hidden information from datasets consisting of both nominal and numerical attributes. However, their need to test the suitability of every attribute at every tree node, in addition to testing every possible split-point for every numerical attribute, can be computationally expensive, particularly for datasets with high dimensionality. This paper proposes SPAARC, a method for speeding up the decision tree induction process that consists of two components addressing these issues: sampling of the numeric-attribute split-points at each tree node and dynamic adjustment of the node attribute selection space. Further, these components can be applied to almost any decision tree algorithm. To confirm its validity, SPAARC has been tested against an implementation of the CART algorithm on 18 freely available datasets from the UCI data repository. Results from this testing indicate that the two components of SPAARC combined have minimal effect on decision tree classification accuracy yet reduce model build times by as much as 69%.
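The abstract describes SPAARC's two components only at a high level, without implementation detail. As a rough, non-authoritative illustration of the first idea, evaluating a sampled subset of numeric split-points rather than every candidate midpoint, the Python sketch below uses a hypothetical best_split_sampled function with an illustrative n_candidates parameter; neither the name nor the equal-spacing sampling strategy is taken from the paper.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a 1-D array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split_sampled(values, labels, n_candidates=20):
    """Evaluate a fixed number of candidate thresholds for one numeric
    attribute instead of every midpoint between sorted values.

    Illustrative sketch only: n_candidates and the equally spaced
    candidate grid are assumptions, not SPAARC's actual scheme.
    """
    lo, hi = values.min(), values.max()
    if lo == hi:
        return None, np.inf  # attribute is constant at this node
    # Equally spaced interior thresholds across the attribute's range.
    candidates = np.linspace(lo, hi, n_candidates + 2)[1:-1]
    best_threshold, best_score = None, np.inf
    n = len(labels)
    for t in candidates:
        left = labels[values <= t]
        right = labels[values > t]
        if len(left) == 0 or len(right) == 0:
            continue
        # Weighted Gini impurity of the two child partitions.
        score = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
        if score < best_score:
            best_threshold, best_score = t, score
    return best_threshold, best_score

# Example: find a sampled split threshold on a synthetic numeric attribute.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
y = np.array([0] * 100 + [1] * 100)
print(best_split_sampled(x, y))
```

With a small, fixed number of candidate thresholds, the cost of splitting on a numeric attribute no longer grows with the number of distinct values at a node, which is the kind of saving the abstract attributes to split-point sampling.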



Acknowledgements

This research is supported by an Australian Government Research Training Program (RTP) scholarship.

Author information


Corresponding author

Correspondence to Darren Yates.



Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Yates, D., Islam, M.Z., Gao, J. (2019). SPAARC: A Fast Decision Tree Algorithm. In: Islam, R., et al. Data Mining. AusDM 2018. Communications in Computer and Information Science, vol 996. Springer, Singapore. https://doi.org/10.1007/978-981-13-6661-1_4


  • DOI: https://doi.org/10.1007/978-981-13-6661-1_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-6660-4

  • Online ISBN: 978-981-13-6661-1

  • eBook Packages: Computer Science (R0)
