
Impressions2Font: Generating Fonts by Specifying Impressions

  • Conference paper
  • Included in: Document Analysis and Recognition – ICDAR 2021 (ICDAR 2021)

Abstract

Various fonts give us different impressions, which are often represented by words. This paper proposes Impressions2Font (Imp2Font), which generates font images with specific impressions. Imp2Font is an extension of the conditional generative adversarial network (GAN). More precisely, Imp2Font accepts an arbitrary number of impression words as the condition for generating font images. These impression words are converted into a soft-constraint vector by an impression embedding module built on a word embedding technique. Qualitative and quantitative evaluations show that Imp2Font generates font images of higher quality than comparable methods when given multiple impression words, or even unlearned words.
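As a rough illustration of this conditioning scheme, the sketch below shows a toy conditional generator that concatenates a noise vector with a single pooled impression embedding. It is a minimal sketch, not the paper's architecture: the `ConditionalGenerator` name, the layer sizes, and the assumption that the impression words have already been pooled into one 300-dimensional word2vec-style vector are all ours.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy conditional generator: the condition vector (a pooled impression
    embedding) is concatenated with the noise vector, as in a standard
    conditional GAN. Layer sizes are illustrative only."""

    def __init__(self, z_dim=100, cond_dim=300, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, img_size * img_size),
            nn.Tanh(),  # glyph pixels in [-1, 1]
        )

    def forward(self, z, cond):
        x = torch.cat([z, cond], dim=1)  # soft constraint enters via concatenation
        return self.net(x).view(-1, 1, self.img_size, self.img_size)

g = ConditionalGenerator()
z = torch.randn(1, 100)       # noise vector
cond = torch.randn(1, 300)    # stand-in for a pooled impression embedding
img = g(z, cond)              # one 64x64 glyph image, shape (1, 1, 64, 64)
```

Concatenating the condition with the noise input is the standard conditional-GAN recipe of Mirza and Osindero [14]; the paper's impression embedding module replaces the usual one-hot class label with a soft semantic vector.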


Notes

  1. In this paper, we use the term “impression” in a broader sense; some impressions are described by words that relate more to font shape, such as sans-serif, than to subjective impressions.

  2. These two differences make it very difficult to compare Wang et al. [20] and our proposed method fairly.

  3. As noted later, each impression word is converted to a semantic vector by word2vec [13]. We therefore remove overly rare impression words that are not included even in the 3-million-word English vocabulary used to train word2vec; this leaves \(K=1,574\) impression words for the experiments below. Note that an impression word with hyphenation is split into sub-words, and its semantic vector is then derived as the sum of the sub-words’ semantic vectors (see the sketch after these notes).

  4. “HERONS” is a word commonly used to check font style, since it contains sufficient variation of stroke shapes.
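To make note 3 concrete, here is a minimal sketch of the word2vec conversion it describes, assuming the standard pretrained 3-million-word Google News vectors loaded with gensim; the file name and the `impression_vector` helper are illustrative, not from the paper.

```python
import numpy as np
from gensim.models import KeyedVectors

# Pretrained 3-million-word Google News vectors (300-d), as referenced in
# note 3; the file name is the standard distribution name, assumed here.
kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def impression_vector(word):
    """Semantic vector of an impression word. Hyphenated words are split
    into sub-words and the sub-word vectors are summed (note 3); words
    entirely outside the vocabulary are dropped from the impression set."""
    sub_words = word.split("-")
    vecs = [kv[w] for w in sub_words if w in kv]
    if not vecs:
        return None  # too rare: removed from the K impression words
    return np.sum(vecs, axis=0)

print(impression_vector("old-fashioned").shape)  # (300,): vec(old) + vec(fashioned)
```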

References

  1. Azadi, S., Fisher, M., Kim, V.G., Wang, Z., Shechtman, E., Darrell, T.: Multi-content GAN for few-shot font style transfer. In: CVPR (2018)

  2. Cha, J., Chun, S., Lee, G., Lee, B., Kim, S., Lee, H.: Few-shot compositional font generation with dual memory. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12364, pp. 735–751. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58529-7_43

  3. Chen, T., Wang, Z., Xu, N., Jin, H., Luo, J.: Large-scale tag-based font retrieval with generative feature learning. In: ICCV (2019)

  4. Davis, R.C., Smith, H.J.: Determinants of feeling tone in type faces. J. Appl. Psychol. 17(6), 742–764 (1933)

  5. Goodfellow, I.J., et al.: Generative adversarial networks. arXiv preprint arXiv:1406.2661 (2014)

  6. Hayashi, H., Abe, K., Uchida, S.: GlyphGAN: style-consistent font generation based on generative adversarial networks. Knowl.-Based Syst. 186, 104927 (2019)

  7. Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In: NIPS (2017)

  8. Ikoma, M., Iwana, B.K., Uchida, S.: Effect of text color on word embeddings. In: DAS (2020)

  9. Jiang, Y., Lian, Z., Tang, Y., Xiao, J.: DCFont: an end-to-end deep Chinese font generation system. In: SIGGRAPH Asia (2017)

  10. Kaneko, T., Ushiku, Y., Harada, T.: Class-distinct and class-mutual image generation with GANs. In: BMVC (2019)

  11. Lyu, P., Bai, X., Yao, C., Zhu, Z., Huang, T., Liu, W.: Auto-encoder guided GAN for Chinese calligraphy synthesis. In: ICDAR, vol. 1, pp. 1095–1100 (2017)

  12. Mao, Q., Lee, H.Y., Tseng, H.Y., Ma, S., Yang, M.H.: Mode seeking generative adversarial networks for diverse image synthesis. In: CVPR (2019)

  13. Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: NIPS (2013)

  14. Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014)

  15. Odena, A., Olah, C., Shlens, J.: Conditional image synthesis with auxiliary classifier GANs. In: ICML (2017)

  16. O’Donovan, P., Lībeks, J., Agarwala, A., Hertzmann, A.: Exploratory font selection using crowdsourced attributes. ACM Trans. Graph. 33(4), 92 (2014)

  17. Poffenberger, A.T., Franken, R.: A study of the appropriateness of type faces. J. Appl. Psychol. 7(4), 312–329 (1923)

  18. Shirani, A., Dernoncourt, F., Echevarria, J., Asente, P., Lipka, N., Solorio, T.: Let me choose: from verbal context to font selection. In: ACL (2020)

  19. Vijayakumar, A., Vedantam, R., Parikh, D.: Sound-Word2Vec: learning word representations grounded in sounds. In: EMNLP (2017)

  20. Wang, Y., Gao, Y., Lian, Z.: Attribute2Font: creating fonts you want from attributes. ACM Trans. Graph. 39(4), 69 (2020)

  21. Wang, Z., et al.: DeepFont: identify your font from an image. In: ACM Multimedia (2015)

  22. Zhu, A., Lu, X., Bai, X., Uchida, S., Iwana, B.K., Xiong, S.: Few-shot text style transfer via deep feature similarity. IEEE Trans. Image Process. 29, 6932–6946 (2020)

  23. Zramdini, A., Ingold, R.: Optical font recognition using typographical features. IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 877–882 (1998)

Acknowledgment

This work was supported by JSPS KAKENHI Grant Number JP17H06100.

Author information


Corresponding author

Correspondence to Seiya Matsuda.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Matsuda, S., Kimura, A., Uchida, S. (2021). Impressions2Font: Generating Fonts by Specifying Impressions. In: Lladós, J., Lopresti, D., Uchida, S. (eds) Document Analysis and Recognition – ICDAR 2021. ICDAR 2021. Lecture Notes in Computer Science, vol. 12823. Springer, Cham. https://doi.org/10.1007/978-3-030-86334-0_48


  • DOI: https://doi.org/10.1007/978-3-030-86334-0_48


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86333-3

  • Online ISBN: 978-3-030-86334-0

  • eBook Packages: Computer Science (R0)
