Transfer Learning with Local Smoothness Regularizer

  • Conference paper
Web Technologies and Applications (APWeb 2012)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 7235)

Abstract

The main goal of transfer learning is to reuse data from related domains to learn models for the target domain. In existing instance-transfer learning algorithms, the relevance of related-domain instances is estimated mainly from a small amount of labeled instances, and the generalization ability of these algorithms needs improvement. To make the relevance estimation more reliable, we propose to use unlabeled target-domain instances as additional training data. These instances serve as a new source of domain knowledge that helps determine the relevance of related-domain instances. Within the general boosting framework, we introduce a local smoothness regularizer and obtain a new empirical loss function in which unlabeled instances are included. Gradient descent is used to iteratively optimize this loss function, yielding a new instance-transfer learning algorithm. Experimental results on text datasets show that the new algorithm outperforms competitive algorithms.
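The full paper is not reproduced here, so the following is only a rough illustration of the idea described in the abstract, not the authors' exact formulation: an AdaBoost-style exponential loss on the labeled instances combined with a graph-based local-smoothness penalty on unlabeled target-domain instances, optimized by (functional) gradient descent on the ensemble output. All names and parameters (smoothness_regularized_loss, W, lam, and so on) are hypothetical.

import numpy as np

def smoothness_regularized_loss(F, y, labeled_idx, W, lam=0.1):
    # Illustrative sketch of a boosting-style objective with a local
    # smoothness regularizer; an assumption, not the paper's exact loss.
    #   F           : current ensemble scores for all instances (labeled + unlabeled)
    #   y           : +/-1 labels for the labeled instances
    #   labeled_idx : indices of the labeled instances within F
    #   W           : pairwise similarity matrix over all instances (e.g. a kNN graph)
    #   lam         : trade-off between empirical loss and smoothness
    empirical = np.exp(-y * F[labeled_idx]).sum()      # exponential loss on labeled data
    diff = F[:, None] - F[None, :]
    smooth = 0.5 * (W * diff ** 2).sum()               # penalize differing outputs on similar instances
    return empirical + lam * smooth

def loss_gradient(F, y, labeled_idx, W, lam=0.1):
    # Gradient of the objective with respect to the ensemble scores F.
    # In a functional-gradient-descent reading of the abstract, each new
    # weak learner would be fitted to the negative of this gradient.
    grad = np.zeros_like(F)
    grad[labeled_idx] = -y * np.exp(-y * F[labeled_idx])
    L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian of the similarity matrix
    grad += 2.0 * lam * L @ F
    return grad

In a TrAdaBoost-like loop, the instance weights and each new weak learner could be derived from this gradient; the concrete update rules used by the authors are given in the paper itself.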

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hong, J., Chen, B., Yin, J. (2012). Transfer Learning with Local Smoothness Regularizer. In: Sheng, Q.Z., Wang, G., Jensen, C.S., Xu, G. (eds) Web Technologies and Applications. APWeb 2012. Lecture Notes in Computer Science, vol 7235. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29253-8_45

  • DOI: https://doi.org/10.1007/978-3-642-29253-8_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29252-1

  • Online ISBN: 978-3-642-29253-8

  • eBook Packages: Computer Science, Computer Science (R0)
