Ontology for a voice transcription of OpenStreetMap data: the case of space apprehension by visually impaired persons

Abstract: In this paper, we propose a vocal ontology of OpenStreetMap data for the apprehension of space by visually impaired people. The platform, based on produsage, gives data producers the freedom to choose the descriptors of geocoded locations. Unfortunately, this freedom, also known as folksonomy, complicates subsequent data searches. We address this issue with a simple but usable method to extract data from OSM databases and deliver it to visually impaired people using Text-To-Speech technology. We focus on helping people with visual disabilities to plan an itinerary and to comprehend a map by querying a computer and obtaining information about the surrounding environment in a mono-modal human-computer dialogue.
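The pipeline the abstract describes (extract OSM data, then render it as speech) can be sketched as follows. This is an illustrative assumption, not the authors' ontology: the descriptor keys, phrasing, and the `describe_element` helper are hypothetical, and a real system would feed the resulting string to a Text-To-Speech engine.

```python
# Hedged sketch: turning an OSM element's raw tag dictionary into a sentence
# that a Text-To-Speech layer could read aloud. The descriptor keys checked
# below are illustrative; folksonomy means the same concept may appear under
# different keys, which is the search problem the paper addresses.

def describe_element(tags):
    """Build a short spoken description from an OSM element's tags."""
    name = tags.get("name", "an unnamed place")
    # Check several candidate descriptor keys in a fixed priority order.
    for key in ("amenity", "shop", "highway", "building"):
        if key in tags:
            return f"{name}, a {tags[key].replace('_', ' ')}"
    return name

print(describe_element({"name": "Café de la Gare", "amenity": "cafe"}))
```

In a full system, the input dictionary would come from an OSM database query and the output string would be passed to the TTS component of the dialogue.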
Document type: Conference papers

Cited literature: 6 references

https://hal.archives-ouvertes.fr/hal-01533064
Contributor: Said Boularouk
Submitted on: Tuesday, June 6, 2017 - 2:04:32 PM
Last modification on: Friday, March 22, 2019 - 11:34:07 AM
Long-term archiving on: Thursday, September 7, 2017 - 12:26:18 PM

File: waset_Londres.pdf (produced by the author(s))

Licence: Public Domain

Identifiers

  • HAL Id: hal-01533064, version 1

Citation

Said Boularouk, Didier Josselin, Eitan Altman. Ontology for a voice transcription of OpenStreetMap data: the case of space apprehension by visually impaired persons. World Academy of Science, Engineering and Technology, May 2017, London, United Kingdom. ⟨hal-01533064⟩

Metrics: 330 record views, 191 file downloads