
AbstractNet: A Generative Model for High Density Inputs

  • Conference paper
  • First Online:
Machine Learning, Optimization, and Big Data (MOD 2017)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10710)

Abstract

This paper introduces AbstractNet, a generative model for high-density inputs. The model uses unsupervised learning to build feature maps of its input. For raw audio generation, it substantially reduces the amount of input data and computing power required to reach results comparable to the state of the art.
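
The abstract does not spell out the architecture, so the sketch below is only a hypothetical illustration of the general idea it describes: learning feature maps from raw audio without labels, here via a small PyTorch autoencoder whose encoder output would serve as the feature map for a downstream generator. The class name `FrameAutoencoder`, the frame length, and the feature dimension are assumptions for readability, not details taken from the paper.

```python
# Hypothetical illustration only: unsupervised feature-map learning on raw
# audio frames with a small autoencoder. This is NOT the AbstractNet
# architecture; all names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class FrameAutoencoder(nn.Module):
    """Learns compact feature maps from fixed-length raw-audio frames."""

    def __init__(self, frame_len: int = 1024, feature_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(frame_len, 256), nn.ReLU(),
            nn.Linear(256, feature_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, frame_len),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the input frame; the encoder output is the feature map.
        return self.decoder(self.encoder(x))


def train(model: FrameAutoencoder, frames: torch.Tensor, epochs: int = 10) -> FrameAutoencoder:
    """Unsupervised training: minimize reconstruction error on raw frames."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(frames), frames)
        loss.backward()
        opt.step()
    return model
```

In such a setup, the trained `encoder` would compress each audio frame into a low-dimensional feature map, which is one plausible way to cut the input data and compute needed by a raw-audio generator.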


Acknowledgements

I want to thank Alain Lioret from Université Paris 8, Aurélien Schlossman from Ariane Group, Nicolas Vidal, Martin Tricaud, and everyone at the École Supérieure de Génie Informatique (ESGI). I would also like to thank everyone who believed in this project.

Author information

Corresponding author

Correspondence to Boris Musarais.

Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Musarais, B. (2018). AbstractNet: A Generative Model for High Density Inputs. In: Nicosia, G., Pardalos, P., Giuffrida, G., Umeton, R. (eds) Machine Learning, Optimization, and Big Data. MOD 2017. Lecture Notes in Computer Science, vol 10710. Springer, Cham. https://doi.org/10.1007/978-3-319-72926-8_38

  • DOI: https://doi.org/10.1007/978-3-319-72926-8_38

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-72925-1

  • Online ISBN: 978-3-319-72926-8

  • eBook Packages: Computer Science, Computer Science (R0)
