On the Feasibility of Discovering Meta-Patterns from a Data Ensemble

  • Conference paper
  • Published in: Discovery Science (DS 2015)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9356)

Abstract

We introduce meta-pattern discovery from a data ensemble, a new paradigm of pattern discovery that goes beyond the KDD process model. A data ensemble, which represents a set of data sets, appears to be a more natural model of big data (we focus on the volume and velocity aspects of big data). We propose two kinds of meta-patterns, one for an unsupervised setting and one for a supervised setting, each of which specifies patterns, such as clusters, over a set of data sets. Experiments with one synthetic and two real data ensembles show that our solutions for both settings are feasible.
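To make the paradigm above concrete, the following is a minimal, unofficial sketch of the unsupervised setting: a data ensemble is modeled as a list of data sets, each data set is clustered separately, and the resulting centroids are clustered again to obtain a coarse meta-pattern. The synthetic ensemble, the use of scikit-learn's k-means, and all parameter values are assumptions made for this illustration; they are not the paper's algorithm or experimental setup.

    # Toy sketch only: cluster each data set in an ensemble, then cluster the
    # per-data-set centroids to summarize them as a "meta-pattern". This is an
    # illustration of the general idea, not the paper's method.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # A synthetic data ensemble: 10 data sets, each with 200 points in 2-D,
    # drawn around a randomly chosen center per data set.
    ensemble = [
        rng.normal(loc=rng.uniform(-5, 5, size=2), scale=1.0, size=(200, 2))
        for _ in range(10)
    ]

    # Step 1: discover a pattern (here, k-means centroids) in each data set.
    per_dataset_centroids = [
        KMeans(n_clusters=3, n_init=10, random_state=0).fit(data_set).cluster_centers_
        for data_set in ensemble
    ]

    # Step 2: cluster all centroids across the ensemble; the resulting centers
    # act as a crude meta-pattern summarizing the individual clusterings.
    all_centroids = np.vstack(per_dataset_centroids)
    meta_kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(all_centroids)
    print("Meta-cluster centers:\n", meta_kmeans.cluster_centers_)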

E. Suzuki: Part of this research was supported by Grant-in-Aid for Scientific Research 25280085 and 15K12100 from the Japanese Ministry of Education, Culture, Sports, Science and Technology.

Notes

  1. In this paper, we use the same values and denote them by \(\beta \).

  2. We adopt the standard procedure of using the Laplace correction (a generic sketch of the correction follows these notes).

  3. This condition is our modification to the original BIRCH.

  4. Note that this solution is independent of the one in the previous section. A semi-supervised, hybrid solution is beyond the scope of this paper.

  5. We use the past tense for our past actions.

  6. Preliminary experiments showed that the number of random restarts has a minor influence on the performance as long as it is not extremely small.

  7. Due to the good performance under these conditions, we believe that our method outperforms sampling-based k-means algorithms as well as state-of-the-art methods.

  8. As this is a feasibility study, we did not compare our method with other methods.

  9. We used the default setting of ELLA (http://www.seas.upenn.edu/~eeaton/publications.html).
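As a generic reference for note 2 above, the Laplace correction is the standard add-one smoothing of a count-based probability estimate, which keeps estimates nonzero for unseen values. The sketch below is a plain illustration of that correction under assumed names; it is not taken from the paper's implementation.

    # Generic Laplace (add-one) correction for a count-based probability
    # estimate; the function name and example are illustrative assumptions.
    from collections import Counter

    def laplace_estimate(labels, target, num_classes):
        """Estimate P(target) from observed labels with the Laplace correction."""
        counts = Counter(labels)
        return (counts[target] + 1) / (len(labels) + num_classes)

    # 'c' is never observed, yet its estimate is nonzero: (0 + 1) / (3 + 3) = 1/6.
    print(laplace_estimate(["a", "a", "b"], "c", num_classes=3))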

References

  1. Aggarwal, C.C., Han, J., Wang, J., Yu, P.S.: A framework for clustering evolving data streams. In: Proceedings of VLDB 2003, pp. 81–92 (2003)

  2. DuMouchel, W., Volinsky, C., Johnson, T., Cortes, C., Pregibon, D.: Squashing flat files flatter. In: Proceedings of KDD 1999, pp. 6–15 (1999)

  3. Erna, A., Yu, L., Zhao, K., Chen, W., Suzuki, E.: Facial expression data constructed with Kinect and their clustering stability. In: Ślȩzak, D., Schaefer, G., Vuong, S.T., Kim, Y.-S. (eds.) AMT 2014. LNCS, vol. 8610, pp. 421–431. Springer, Heidelberg (2014)

  4. Fayyad, U.M., Piatetsky-Shapiro, G., Smyth, P.: From data mining to knowledge discovery: an overview. In: Advances in Knowledge Discovery and Data Mining, pp. 1–34. AAAI/MIT Press, Menlo Park (1996)

  5. Feldman, D., Schmidt, M., Sohler, C.: Turning big data into tiny data: constant-size coresets for \(k\)-means, PCA and projective clustering. In: Proceedings of SODA 2013, pp. 1434–1453 (2013)

  6. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  7. Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500), 2323–2326 (2000)

  8. Ruvolo, P., Eaton, E.: ELLA: an efficient lifelong learning algorithm. In: Proceedings of ICML, vol. 1, pp. 507–515 (2013)

  9. Seidl, T., Assent, I., Kranen, P., Krieger, R., Herrmann, J.: Indexing density models for incremental learning and anytime classification on data streams. In: Proceedings of EDBT 2009, pp. 311–322 (2009)

  10. Zhang, D., Zhou, Z.-H., Chen, S.: Semi-supervised dimensionality reduction. In: Proceedings of SDM 2007, pp. 629–634 (2007)

  11. Zhang, T., Ramakrishnan, R., Livny, M.: BIRCH: a new data clustering algorithm and its applications. Data Min. Knowl. Discov. 1(2), 141–182 (1997)

Author information

Corresponding author

Correspondence to Einoshin Suzuki.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Suzuki, E. (2015). On the Feasibility of Discovering Meta-Patterns from a Data Ensemble. In: Japkowicz, N., Matwin, S. (eds) Discovery Science. DS 2015. Lecture Notes in Computer Science (LNAI), vol 9356. Springer, Cham. https://doi.org/10.1007/978-3-319-24282-8_22

  • DOI: https://doi.org/10.1007/978-3-319-24282-8_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-24281-1

  • Online ISBN: 978-3-319-24282-8
