Conference papers


Abstract: We propose in this article a method for text extraction in images taken from city scenes. This method is used in the French iTowns project (iTowns ANR project, 2008) to automatically enrich a cartographic database by extracting text from geolocalized pictures of town streets. This task is difficult because: 1. text in this environment varies in shape, size, color, orientation, etc.; 2. pictures may be blurred, as they are taken from a moving vehicle, and text may suffer perspective deformations; 3. all pictures are taken outdoors, in unconstrained conditions (in particular, lighting varies from one picture to the next), among various objects that can lead to false positives. We therefore cannot make assumptions about the text we search for; the only supposition is that the text is not handwritten. Our process is based on two main steps: a new segmentation method based on a morphological operator, and a classification step based on a combination of multiple SVM classifiers. We describe the process in this article, measure the efficiency of each step, and illustrate the global scheme on an example.
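The two-step pipeline the abstract describes — a morphological segmentation that proposes candidate text regions, followed by a combination of SVM classifiers that filters them — can be sketched as follows. This is only an illustrative approximation, not the authors' actual method: the morphological white top-hat used here, the threshold rule, the kernel choices, and the majority-vote fusion are all assumptions made for the sketch.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier

def segment_candidates(gray, size=5):
    """Morphological segmentation sketch: a white top-hat keeps
    bright structures smaller than the structuring element (e.g.
    light text on a darker background), then a simple threshold
    and connected-component labeling yield candidate regions.
    The actual operator used in the paper may differ."""
    opened = ndimage.grey_opening(gray, size=(size, size))
    tophat = gray - opened
    mask = tophat > tophat.mean() + tophat.std()
    labels, n = ndimage.label(mask)
    return labels, n

def build_classifier():
    """One simple way to combine 'multiple SVM classifiers':
    three SVMs with different kernels, fused by majority vote.
    The paper's combination scheme is not specified here."""
    svms = [
        ("linear", SVC(kernel="linear")),
        ("rbf", SVC(kernel="rbf")),
        ("poly", SVC(kernel="poly", degree=2)),
    ]
    return VotingClassifier(estimators=svms, voting="hard")

# Toy usage: a dark image with one bright 3x3 "text" blob.
img = np.zeros((32, 32))
img[10:13, 10:13] = 1.0
labels, n = segment_candidates(img)
print(n)  # → 1 (the single bright blob becomes one candidate)
```

In a real system, each candidate region would be described by features (geometry, texture, stroke statistics) before being passed to the SVM combination for a text / non-text decision.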

Cited literature: 16 references
Contributor: Beatriz Marcotegui
Submitted on: Monday, December 2, 2013 - 4:05:08 PM
Last modification on: Thursday, November 18, 2021 - 4:07:16 AM
Long-term archiving on: Monday, March 3, 2014 - 3:50:29 AM


Files produced by the author(s)


  • HAL Id: hal-00906998, version 1


Jonathan Fabrizio, Matthieu Cord, Beatriz Marcotegui. Text Extraction from Street Level Images. CMRT09 - CityModels, Roads and Traffic, Sep 2009, Paris, France. pp.199-204. ⟨hal-00906998⟩


