
Handwriting Styles: Benchmarks and Evaluation Metrics

Abstract : Extracting styles of handwriting is a challenging problem, since the styles themselves are not well defined. It is a key component in developing systems that offer more personalized experiences for humans. In this paper, we propose baseline benchmarks in order to set anchors for estimating the relative quality of different handwriting style methods. This is done using deep learning techniques, which have shown remarkable results in a range of machine learning tasks, including classification, regression, and, most relevant to our work, the generation of temporal sequences. We discuss the challenges associated with evaluating our methods, which are related to the evaluation of generative models in general. We then propose evaluation metrics that we find relevant to this problem, and we discuss how we assess these metrics. In this study, we use the IRON-OFF dataset [1]. To the best of our knowledge, no benchmarks or evaluation metrics for this task exist yet, and this dataset has not been used before in the context of handwriting synthesis.
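The abstract does not specify which evaluation metrics the paper proposes. As an illustration only, a common way to score generated pen trajectories against reference handwriting is dynamic time warping (DTW), which aligns two sequences of differing lengths before accumulating pointwise distances. The sketch below is a minimal, generic DTW implementation over (x, y) pen coordinates; it is an assumption for illustration, not the paper's actual metric.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two pen trajectories.

    a, b: arrays of shape (T, 2) holding (x, y) pen positions.
    Returns the cumulative Euclidean cost of the optimal alignment.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # pointwise distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a point in a
                                 cost[i, j - 1],      # skip a point in b
                                 cost[i - 1, j - 1])  # align the two points
    return cost[n, m]

# A trajectory aligned with itself has zero DTW cost; a resampled
# (denser) version of the same stroke stays close to zero.
ref = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
gen = np.array([[0.0, 0.0], [0.5, 0.25], [1.0, 0.5], [2.0, 1.0]])
print(dtw_distance(ref, ref))  # 0.0
print(dtw_distance(ref, gen))
```

Because DTW tolerates differences in writing speed, it is often preferred over a rigid per-timestep error when comparing handwriting samples of unequal length.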
Document type :
Conference papers
Contributor : Damien Pellier
Submitted on : Monday, October 22, 2018 - 2:29:02 PM
Last modification on : Thursday, November 19, 2020 - 1:02:26 PM
Long-term archiving on : Wednesday, January 23, 2019 - 2:32:36 PM




  • HAL Id : hal-01900765, version 1



Omar Mohammed, Gérard Bailly, Damien Pellier. Handwriting Styles: Benchmarks and Evaluation Metrics. IEEE International Workshop on Deep and Transfer Learning (DTL 2018), Oct 2018, Valencia, Spain. ⟨hal-01900765⟩


