Conference paper - Year: 2016

Auto-Adaptive Multi-Sensor Architecture

Abstract

To overcome luminosity problems, modern embedded vision systems often integrate technologically heterogeneous sensors. They also have to provide different functionalities, such as photo or video mode, image enhancement or data fusion, according to the user environment. Therefore, today's vision systems should be context-aware and adapt their performance parameters automatically. In this context, we propose a novel auto-adaptive architecture enabling on-the-fly, automatic frame rate and resolution adaptation through a frequency tuning method. This method also aims to reduce power consumption, as an alternative to the existing power gating method. Performance evaluation of an FPGA implementation demonstrates an inter-frame adaptation capability with a relatively low area overhead.

I. INTRODUCTION

For decades, the capability of computer vision systems has increased thanks to the multiplication of integrated sensors. Multi-sensor systems enable many high-level vision applications such as stereo vision, data fusion [1] or 3D stereo view [2]. Smart camera networks also take advantage of the multi-sensor concept for large-scale surveillance applications [3]. More and more vision systems involve several heterogeneous sensors, such as color, infrared or intensified low-light sensors [4], to overcome variable luminosity conditions or to improve application robustness. Frequently, the considered vision system accomplishes various tasks such as video streaming, photo capture or high-level processing (e.g. face detection, object tracking). Each of these tasks imposes different computing performance requirements on the hardware resources, according to the applicative context and the sensor in use. That is why today's vision systems have to be context-aware and able to adapt their performance to the user environment [5]. Fig. 1 illustrates the differences between video and photo user mode parameters: latency, frame rate, resolution, image quality and power consumption. While a video mode needs a high frame rate and low latency, a photo mode rather expects a higher resolution and higher image quality. In this context, we expect the system architecture to adapt itself on-the-fly to the required frame rate or resolution while minimizing the use-case transition time when the user mode changes. In addition, the frame rate and the resolution of the involved sensors are not assumed to be known in advance. Although numerous adaptable architectures exist for high-performance image processing [6]-[8], and even for energy-aware heterogeneous vision systems [2], they do not enable such dynamic adaptation of the frame rate or the resolution. In this paper, we propose a novel pixel frequency tuning approach for heterogeneous multi-sensor vision systems.
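To make the frequency-tuning idea concrete, the minimal C sketch below computes the lowest pixel clock that still sustains a given sensor mode; lowering the clock to this bound is the kind of power-saving alternative to power gating that the abstract describes. The mode descriptor, the blanking model and the example figures are illustrative assumptions, not values or interfaces taken from the paper.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical sensor mode descriptor: field names and the blanking
 * model are illustrative assumptions, not taken from the paper. */
typedef struct {
    uint32_t width;    /* active pixels per line        */
    uint32_t height;   /* active lines per frame        */
    uint32_t fps;      /* target frame rate (frames/s)  */
    uint32_t h_blank;  /* horizontal blanking (pixels)  */
    uint32_t v_blank;  /* vertical blanking (lines)     */
} sensor_mode_t;

/* Minimum pixel clock (Hz) sustaining the requested mode:
 * f_pix = (W + Hblank) * (H + Vblank) * fps. */
static uint64_t required_pixel_clock(const sensor_mode_t *m)
{
    return (uint64_t)(m->width + m->h_blank)
         * (uint64_t)(m->height + m->v_blank)
         * (uint64_t)m->fps;
}

int main(void)
{
    sensor_mode_t video = { 1280, 720, 60, 370, 30 };  /* video: high fps  */
    sensor_mode_t photo = { 4096, 3072, 5, 512, 64 };  /* photo: high res  */

    printf("video mode: %llu Hz\n",
           (unsigned long long)required_pixel_clock(&video));
    printf("photo mode: %llu Hz\n",
           (unsigned long long)required_pixel_clock(&photo));
    return 0;
}

On a user-mode change (e.g. video to photo), an on-the-fly adaptation along these lines would reprogram the clock generator to the new bound between two frames, instead of switching the sensor off and back on.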
Main file: ISCAS_2016_for_Review.pdf (1.68 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01265219, version 1 (01-02-2016)

Identifiers

  • HAL Id: hal-01265219, version 1

Cite

Ali Isavudeen, Nicolas Ngan, Eva Dokladalova, Mohamed Akil. Auto-Adaptive Multi-Sensor Architecture. IEEE International Symposium on Circuits and Systems (ISCAS 2016), IEEE, May 2016, Montréal, Canada. ⟨hal-01265219⟩
177 views
236 downloads
