A Comparison between Multi-Layer Perceptrons and Convolutional Neural Networks for Text Image Super-Resolution

Clément Peyrard ¹·², Franck Mamalet ¹, Christophe Garcia ²
² Imagine - Extraction de Caractéristiques et Identification, LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: We compare the performance of several Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (ConvNets) for single text image Super-Resolution. We propose an example-based framework for both MLPs and ConvNets, in which a non-linear mapping between pairs of patches and high-frequency pixel values is learned. We then demonstrate that, at equivalent complexity, ConvNets are better than MLPs at predicting missing details in upsampled text images. To evaluate performance, we use a recent database (ULR-textSISR-2013a) along with different quality measures. We show that the proposed methods outperform sparse coding-based methods on this database.
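As a rough illustration of the example-based framework described in the abstract, the sketch below shows a small patch-based ConvNet that learns to predict high-frequency detail from bicubically upsampled low-resolution text patches. This is only a minimal sketch in PyTorch; the layer sizes, activations, and training settings are assumptions and do not reproduce the authors' exact architecture.

```python
# Minimal sketch (assumed architecture, not the authors' exact model):
# a small ConvNet maps an upsampled low-resolution patch to its missing
# high-frequency content, trained on (low-res, high-res) patch pairs.
import torch
import torch.nn as nn

class PatchSRConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),   # local feature extraction
            nn.Tanh(),
            nn.Conv2d(32, 16, kernel_size=3, padding=1),  # non-linear mapping
            nn.Tanh(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),   # high-frequency prediction
        )

    def forward(self, upsampled_patch):
        # Predict the high-frequency residual and add it to the interpolated input.
        return upsampled_patch + self.features(upsampled_patch)

# Training loop sketch on placeholder data; real pairs would come from
# a text image corpus such as ULR-textSISR-2013a.
model = PatchSRConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

lr_patches = torch.rand(8, 1, 32, 32)  # upsampled low-resolution patches
hr_patches = torch.rand(8, 1, 32, 32)  # corresponding high-resolution patches

for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(model(lr_patches), hr_patches)
    loss.backward()
    optimizer.step()
```

An MLP baseline of comparable complexity would replace the convolutional layers with fully connected layers operating on flattened patches, which is the comparison the paper investigates.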

https://hal.archives-ouvertes.fr/hal-01260671
Contributor: Clément Peyrard
Submitted on: Friday, January 22, 2016 - 2:46:27 PM
Last modification on: Wednesday, November 20, 2019 - 2:44:33 AM

Citation

Clément Peyrard, Franck Mamalet, Christophe Garcia. A Comparison between Multi-Layer Perceptrons and Convolutional Neural Networks for Text Image Super-Resolution. International Conference on Computer Vision Theory and Applications, Mar 2015, Berlin, Germany. ⟨10.5220/0005297200840091⟩. ⟨hal-01260671⟩
