
Cross-lingual sentiment transfer with limited resources

Machine Translation

Abstract

We describe two transfer approaches for building sentiment analysis systems without gold-labeled data in the target language. Unlike previous work, which has focused on English as the sole source language and a small number of target languages, we use multiple source languages to learn a more robust sentiment transfer model for 16 languages from different language families. Our approaches explore the potential of an annotation projection approach and a direct transfer approach that uses cross-lingual word representations and neural networks. Whereas most previous work relies on machine translation, we show that we can build cross-lingual sentiment analysis systems without machine translation or even high-quality parallel data. We conduct experiments under different resource conditions, including in-domain parallel data, out-of-domain parallel data, and in-domain comparable data. Our experiments show that we can build a robust transfer system whose performance can in some cases approach that of a supervised system.
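As a rough illustration of the direct transfer idea described in the abstract, the sketch below trains a classifier on source-language sentences mapped into a shared cross-lingual embedding space and applies it unchanged to target-language text. It is a minimal sketch under assumed inputs (a token-to-vector dictionary of cross-lingual embeddings and labeled source sentences); the paper's actual direct transfer model is an LSTM (see footnote 1), and the averaged-embedding plus logistic-regression setup here is only an illustrative assumption.

```python
# Minimal sketch of direct transfer with cross-lingual word embeddings.
# The averaged-embedding + logistic regression setup is an illustrative
# assumption, not the paper's model (which uses an LSTM; see footnote 1).
import numpy as np
from sklearn.linear_model import LogisticRegression


def sentence_vector(tokens, embeddings, dim):
    """Average the cross-lingual embeddings of the in-vocabulary tokens."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)


def train_direct_transfer(source_sents, source_labels, embeddings, dim):
    """Train on source-language sentences embedded in the shared space."""
    X = np.vstack([sentence_vector(s, embeddings, dim) for s in source_sents])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, source_labels)
    return clf


def predict_target(clf, target_sents, embeddings, dim):
    """Apply the source-trained classifier directly to target-language text."""
    X = np.vstack([sentence_vector(s, embeddings, dim) for s in target_sents])
    return clf.predict(X)
```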


Notes

  1. The source code for the direct transfer model is available here: https://github.com/rasoolims/senti-lstm.

  2. When English is the supervised source language, we also append positive and negative indicator features as additional unigrams and compute their log-count ratios. These indicators are extracted from the sentiment lexicons of Wilson et al. (2005) and Hu and Liu (2004). A sketch of this recipe appears after these notes.

  3. https://www.wiktionary.org/.

  4. Not all tweets from Mozetič et al. (2016) are available anymore (some of them were deleted).

  5. http://alt.qcri.org/semeval2017/task4/.

  6. http://tanzil.net/trans/.

  7. We excluded a subset of the translations from Russian and English that are interpretations as opposed to translations. For Russian, we use the Krachkovsky, Kuliev, Osmanov, Porokhova, and Sablukov translations and for English, we use the Ahmedali, Arberry, Daryabadi, Itani, Mubarakpuri, Pickthall, Qarai, Qaribullah, Sahih, Sarwar, Shakir, Wahiduddin, and Yusufali translations.

  8. LDC2016E30_LORELEI_Mandarin.

  9. LDC2016E93_LORELEI_Farsi.

  10. LDC2016E99_LORELEI_Hungarian.

  11. LDC2016E89_LORELEI_Arabic.

  12. LDC2016E95_LORELEI_Russian.

  13. LDC2016E97_LORELEI_Spanish.

  14. LDC2016E57_LORELEI_IL3_Incident_Language_Pack_for_Year_1_Eval.

  15. https://pypi.python.org/pypi/wikipedia.

  16. Madamira is used in low-resource mode with the form-based ATB_BWFORM tokenization scheme.

  17. https://github.com/sobhe/hazm.

  18. https://opennlp.apache.org/.

  19. http://universaldependencies.org/.

  20. https://code.google.com/archive/p/word2vec/.

  21. http://scikit-learn.org/stable/.

  22. We note that the best scoring system for Arabic on the SemEval 2017 test set had a macro-average recall and accuracy of 58 when training with supervised Arabic data (Rosenthal et al. 2017).
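To make footnote 2 concrete, the following is a minimal sketch, assuming a generic bag-of-words pipeline, of appending lexicon-based positive/negative indicator pseudo-unigrams and computing smoothed log-count ratios in the spirit of Wang and Manning (2012). The pseudo-token names, smoothing constant, and function signatures are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: lexicon indicator unigrams + smoothed log-count ratios.
# Names and the smoothing constant are assumptions, not the authors' code.
from collections import Counter
import math


def add_lexicon_indicators(tokens, pos_lexicon, neg_lexicon):
    """Append a __POS__/__NEG__ pseudo-unigram for each lexicon hit."""
    extra = []
    for tok in tokens:
        if tok in pos_lexicon:
            extra.append("__POS__")
        if tok in neg_lexicon:
            extra.append("__NEG__")
    return tokens + extra


def log_count_ratios(positive_docs, negative_docs, alpha=1.0):
    """Log ratio of smoothed unigram counts in positive vs. negative documents."""
    pos_counts = Counter(t for doc in positive_docs for t in doc)
    neg_counts = Counter(t for doc in negative_docs for t in doc)
    vocab = set(pos_counts) | set(neg_counts)
    pos_total = sum(pos_counts[t] + alpha for t in vocab)
    neg_total = sum(neg_counts[t] + alpha for t in vocab)
    return {
        t: math.log(((pos_counts[t] + alpha) / pos_total)
                    / ((neg_counts[t] + alpha) / neg_total))
        for t in vocab
    }
```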

References

  • Abdul-Mageed M, Diab MT (2011) Subjectivity and sentiment annotation of modern standard Arabic newswire. In: Proceedings of the 5th linguistic annotation workshop, Association for Computational Linguistics, pp 110–118

  • Ammar W, Mulcaire G, Tsvetkov Y, Lample G, Dyer C, Smith NA (2016) Massively multilingual word embeddings. arXiv:1602.01925

  • Baccianella S, Esuli A, Sebastiani F (2010) SentiWordNet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining. LREC 10:2200–2204


  • Balahur A, Turchi M (2014) Comparative experiments using supervised learning and machine translation for multilingual sentiment analysis. Comput Speech Lang 28(1):56–75


  • Berg-Kirkpatrick T, Burkett D, Klein D (2012) An empirical investigation of statistical significance in NLP. In: Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, Association for Computational Linguistics, pp 995–1005. http://aclweb.org/anthology/D12-1091

  • Brooke J, Tofiloski M, Taboada M (2009) Cross-linguistic sentiment analysis: from English to Spanish. In: RANLP, pp 50–54

  • Chang PC, Galley M, Manning CD (2008) Optimizing Chinese word segmentation for machine translation performance. In: Proceedings of the third workshop on statistical machine translation (StatMT ’08), Association for Computational Linguistics, Stroudsburg, PA, pp 224–232. http://dl.acm.org/citation.cfm?id=1626394.1626430

  • Chen X, Sun Y, Athiwaratkun B, Cardie C, Weinberger K (2016) Adversarial deep averaging networks for cross-lingual sentiment classification. arXiv:1606.01614

  • Christodouloupoulos C, Steedman M (2014) A massively parallel corpus: the Bible in 100 languages. Language Resources and Evaluation, pp 1–21

  • Duh K, Fujino A, Nagata M (2011) Is machine translation ripe for cross-lingual sentiment classification? In: Proceedings of the 49th annual meeting of the Association for Computational Linguistics: human language technologies: short papers—volume 2, Association for Computational Linguistics (HLT ’11), Stroudsburg, PA, pp 429–433. http://dl.acm.org/citation.cfm?id=2002736.2002823

  • Fan RE, Chang KW, Hsieh CJ, Wang XR, Lin CJ (2008) Liblinear: a library for large linear classification. J Mach Learn Res 9:1871–1874


  • Faruqui M, Dyer C (2014) Improving vector space word representations using multilingual correlation. In: Proceedings of the 14th conference of the european chapter of the Association for Computational Linguistics, Association for Computational Linguistics, Gothenburg, pp 462–471. http://www.aclweb.org/anthology/E14-1049

  • Gouws S, Søgaard A (2015) Simple task-specific bilingual word embeddings. In: Proceedings of the 2015 conference of the North American chapter of the Association for Computational Linguistics: human language technologies, Association for Computational Linguistics, Denver, Colorado, pp 1386–1390. http://www.aclweb.org/anthology/N15-1157

  • Hermann KM, Blunsom P (2013) Multilingual distributed representations without word alignment. arXiv:1312.6173v4

  • Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780


  • Hosseini P, Ahmadian Ramaki A, Maleki H, Anvari M, Mirroshandel SA (2015) SentiPers: a sentiment analysis corpus for Persian

  • Hu M, Liu B (2004) Mining and summarizing customer reviews. In: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, pp 168–177

  • Joshi A, Balamurali A, Bhattacharyya P (2010) A fall-back strategy for sentiment analysis in Hindi: a case study. In: Proceedings of the 8th ICON

  • Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. CoRR abs/1412.6980. arXiv:1412.6980v9

  • Koehn P (2005) Europarl: a parallel corpus for statistical machine translation. MT Summit 5:79–86


  • Liu B (2012) Sentiment analysis and opinion mining. Synth Lect Hum Lang Technol 5(1):1–167


  • Meng X, Wei F, Liu X, Zhou M, Xu G, Wang H (2012) Cross-lingual mixture model for sentiment classification. In: Proceedings of the 50th annual meeting of the Association for Computational Linguistics (volume 1: long papers), Association for Computational Linguistics, Jeju Island, pp 572–581. http://www.aclweb.org/anthology/P12-1060

  • Mihalcea R, Banea C, Wiebe J (2007) Learning multilingual subjective language via cross-lingual projections. In: Proceedings of the 45th annual meeting of the association of computational linguistics, Association for Computational Linguistics, Prague, Czech Republic, pp 976–983. http://www.aclweb.org/anthology/P07-1123

  • Mozetič I, Grčar M, Smailović J (2016) Twitter sentiment for 15 European languages. http://hdl.handle.net/11356/1054, Slovenian language resource repository CLARIN.SI

  • Mukund S, Srihari RK (2010) A vector space model for subjectivity classification in Urdu aided by co-training. In: Proceedings of the 23rd international conference on computational linguistics: posters, Association for Computational Linguistics, pp 860–868

  • Nair V, Hinton GE (2010) Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th international conference on machine learning (ICML-10), pp 807–814

  • Neubig G, Dyer C, Goldberg Y, Matthews A, Ammar W, Anastasopoulos A, Ballesteros M, Chiang D, Clothiaux D, Cohn T, et al (2017) DyNet: the dynamic neural network toolkit. arXiv:1701.03980

  • Och FJ, Ney H (2003) A systematic comparison of various statistical alignment models. Comput Linguist 29(1):19–51


  • Pasha A, Al-Badrashiny M, Diab MT, El Kholy A, Eskander R, Habash N, Pooleery M, Rambow O, Roth R (2014) Madamira: a fast, comprehensive tool for morphological analysis and disambiguation of Arabic. LREC 14:1094–1101


  • Rasooli MS, Collins M (2017) Cross-lingual syntactic transfer with limited resources. Trans Assoc Comput Linguist 5:279–293. https://transacl.org/ojs/index.php/tacl/article/view/922

  • Rosenthal S, Farra N, Nakov P (2017) Semeval-2017 task 4: sentiment analysis in Twitter. In: Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017), Association for Computational Linguistics, Vancouver, pp 502–518. http://www.aclweb.org/anthology/S17-2088

  • Salameh M, Mohammad SM, Kiritchenko S (2015) Sentiment after translation: a case-study on Arabic social media posts. In: Proceedings of the 2015 conference of the North American chapter of the Association for Computational Linguistics: human language technologies, pp 767–777

  • Socher R, Perelygin A, Wu JY, Chuang J, Manning CD, Ng AY, Potts C (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the conference on empirical methods in natural language processing (EMNLP), pp 1631–1642

  • Stratos K, Kim Dk, Collins M, Hsu D (2014) A spectral algorithm for learning class-based n-gram models of natural language. In: Proceedings of the association for uncertainty in artificial intelligence

  • Täckström O, McDonald R, Uszkoreit J (2012) Cross-lingual word clusters for direct transfer of linguistic structure. In: Proceedings of the 2012 conference of the North American chapter of the Association for Computational Linguistics: human language technologies, Association for Computational Linguistics, pp 477–487

  • Vulić I, Moens MF (2016) Bilingual distributed word representations from document-aligned comparable data. J Artif Intell Res 55:953–994


  • Wan X (2008) Using bilingual knowledge and ensemble techniques for unsupervised Chinese sentiment analysis. In: Proceedings of the 2008 conference on empirical methods in natural language processing, Association for Computational Linguistics, Honolulu, Hawaii, pp 553–561. http://www.aclweb.org/anthology/D08-1058

  • Wang S, Manning CD (2012) Baselines and bigrams: simple, good sentiment and topic classification. In: Proceedings of the 50th annual meeting of the Association for Computational Linguistics: short papers-volume 2, Association for Computational Linguistics, pp 90–94

  • Wick M, Kanani P, Pocock A (2015) Minimally-constrained multilingual embeddings via artificial code-switching. In: Workshop on transfer and multi-task learning: trends and new perspectives, Montreal, Canada

  • Wilson T, Wiebe J, Hoffmann P (2005) Recognizing contextual polarity in phrase-level sentiment analysis. In: Proceedings of the conference on human language technology and empirical methods in natural language processing, Association for Computational Linguistics, pp 347–354

  • Yu T, Hidey C, Rambow O, McKeown K (2017) Leveraging sparse and dense feature combinations for sentiment classification. arXiv:1708.03940

  • Zhang R, Lee H, Radev D (2016) Dependency sensitive convolutional neural networks for modeling sentences and documents. arXiv:1611.02361

  • Zhou G, He T, Zhao J (2014) Bridging the language gap: learning distributed semantics for cross-lingual sentiment classification. In: Zong C, Nie JY, Zhao D, Feng Y (eds) Nat Lang Process Chin Comput. Springer, Berlin, pp 138–149


  • Zhou H, Chen L, Shi F, Huang D (2015) Learning bilingual sentiment word embeddings for cross-language sentiment classification. In: Proceedings of the 53rd annual meeting of the Association for Computational Linguistics and the 7th international joint conference on natural language processing (volume 1: long papers), Association for Computational Linguistics, Beijing, China, pp 430–440. http://www.aclweb.org/anthology/P15-1042

  • Zhou X, Wan X, Xiao J (2016a) Attention-based LSTM network for cross-lingual sentiment classification. In: Proceedings of the 2016 conference on empirical methods in natural language processing, Association for Computational Linguistics, Austin, Texas, pp 247–256. https://aclweb.org/anthology/D16-1024

  • Zhou X, Wan X, Xiao J (2016b) Cross-lingual sentiment classification with bilingual document representation learning. In: Proceedings of the 54th annual meeting of the Association for Computational Linguistics (volume 1: long papers), Association for Computational Linguistics, Berlin, pp 1403–1412, http://www.aclweb.org/anthology/P16-1133

Download references

Acknowledgements

Noura Farra, Kathleen McKeown and Axinia Radeva were supported by DARPA LORELEI Grant HR0011-15-2-0041. Kathleen McKeown and Tao Yu were supported by DARPA DEFT Grant FA8750-12-2-0347. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. government. We thank the reviewers for their detailed and helpful comments. We thank the Uyghur native informant and Appen for arranging the annotation. We thank Zixiaofan (Brenda) Yang for preparing the Uyghur evaluation data and meeting with the informant.

Author information


Corresponding author

Correspondence to Noura Farra.

Additional information

This work was done while the author was at Columbia.


About this article


Cite this article

Rasooli, M.S., Farra, N., Radeva, A. et al. Cross-lingual sentiment transfer with limited resources. Machine Translation 32, 143–165 (2018). https://doi.org/10.1007/s10590-017-9202-6

