
Performance Evaluation of Pre-trained Models in Sarcasm Detection Task

  • Conference paper
Web Information Systems Engineering – WISE 2021 (WISE 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13081)


Abstract

Sarcasm is a widespread phenomenon on social media platforms such as Twitter and Instagram. As a critical task in Natural Language Processing (NLP), sarcasm detection plays an important role in many areas of semantic analysis, such as stance detection and sentiment analysis. Recently, pre-trained models (PTMs) trained on large unlabelled corpora have shown excellent performance on various NLP tasks. PTMs learn universal language representations and can help researchers avoid training a model from scratch. The goal of this paper is to evaluate the performance of various PTMs on the sarcasm detection task. We evaluate and analyse several representative PTMs on four well-known sarcasm detection datasets. The experimental results indicate that RoBERTa outperforms the other PTMs and also surpasses the best baseline on three of the four datasets. DistilBERT is the best choice for sarcasm detection when computing resources are limited, whereas XLNet may not be suitable for the task. In addition, we perform a detailed grid search over four hyperparameters to investigate their impact on PTMs; the results show that the learning rate is the most important one. Furthermore, we conduct an error analysis on several sarcastic sentences to explore the reasons for detection failures, which provides instructive ideas for future research.
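The paper itself does not include code; as a rough illustration of the evaluation setup described in the abstract, the following is a minimal sketch, assuming Hugging Face Transformers, of fine-tuning a PTM such as RoBERTa for binary sarcasm classification with a small grid search over the learning rate (the hyperparameter the authors report as most influential). The checkpoint name, file names, column names, and grid values are illustrative assumptions, not the authors' actual configuration.

```python
# Illustrative sketch only (not the authors' released code): fine-tune a PTM such as
# RoBERTa for binary sarcasm detection with Hugging Face Transformers, looping over a
# small learning-rate grid. File names, column names, and grid values are assumptions.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"  # could also be "bert-base-uncased", "distilbert-base-uncased", ...

def compute_metrics(eval_pred):
    # Binary F1 on the validation split.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds)}

def fine_tune(learning_rate):
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Placeholder CSV files with a "text" column and a binary "label" column
    # (0 = non-sarcastic, 1 = sarcastic).
    data = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})
    data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                    batched=True)

    args = TrainingArguments(
        output_dir=f"sarcasm_lr_{learning_rate}",
        learning_rate=learning_rate,
        per_device_train_batch_size=32,
        num_train_epochs=3,
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=data["train"], eval_dataset=data["validation"],
                      tokenizer=tokenizer, compute_metrics=compute_metrics)
    trainer.train()
    return trainer.evaluate()["eval_f1"]

if __name__ == "__main__":
    # Grid over the learning rate only, since the paper reports it as the most
    # influential hyperparameter; batch size, epochs, etc. could be gridded likewise.
    for lr in (1e-5, 2e-5, 3e-5, 5e-5):
        print(f"lr={lr}: validation F1={fine_tune(lr):.4f}")
```

The same loop can be repeated with a different MODEL_NAME to compare checkpoints such as BERT, DistilBERT, ALBERT, or XLNet under identical settings, which mirrors the evaluation protocol the abstract describes.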



Acknowledgements

This work was supported by the National Key Research and Development Program of China No. 2018YFC0831703.

Author information

Corresponding author: Bin Zhou.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, H., Song, X., Zhou, B., Wang, Y., Gao, L., Jia, Y. (2021). Performance Evaluation of Pre-trained Models in Sarcasm Detection Task. In: Zhang, W., Zou, L., Maamar, Z., Chen, L. (eds) Web Information Systems Engineering – WISE 2021. WISE 2021. Lecture Notes in Computer Science, vol. 13081. Springer, Cham. https://doi.org/10.1007/978-3-030-91560-5_5


  • DOI: https://doi.org/10.1007/978-3-030-91560-5_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91559-9

  • Online ISBN: 978-3-030-91560-5

  • eBook Packages: Computer Science; Computer Science (R0)
