Conference paper. Year: 2012

Video Scene analysis for a configurable hardware accelerator dedicated to Smart Camera

Abstract

According to a report by the Center for Research and Prevention of Injuries, fall-related injuries of elderly people in the EU-27 are five times as frequent as other injury causes, and they considerably reduce mobility and independence. Among the diverse applications of computer vision systems, object detection and event recognition are among the most prominent in recognition and motion analysis, and researchers have naturally extended these techniques to fall detection. A fall event, automatically extracted from the video scene, is in itself crucial information that can be used to alert emergency services. In this context, visual information on the corresponding scene is highly important in order to take the "right" decision. Therefore, video compression may be included in the acquisition system to reduce data bandwidth. Meanwhile, detecting such particular situations makes it possible to control the video compression. For instance, the compression can be reduced after a fall to provide more details on the scene, or specific compression rates can be applied to the background and the regions of interest. The video codec should therefore be able to adjust its coding performance (bit-rate, PSNR) according to the events detected in the video scene. To reach such flexibility, we propose to focus on a key processing stage: motion estimation. Ideally, a smart camera embedding both a fall detection method and an adaptive video codec would be a highly relevant solution for this field. We propose two key elements for the design of such a system: a robust fall detection method and a configurable motion estimation accelerator.

The fall detection method is based on basic feature extraction (bounding box aspect ratio and position, best-fitting ellipse characteristics, etc.) from a robust tracking algorithm, followed by an SVM classifier and a final decision stage. We evaluated the robustness of our method on a realistic dataset and assessed the ability of some standard transformations to improve the classification performance. Experiments show that the best result (specificity = 0.99984, accuracy = 0.99822, precision = 0.94737 and recall = 0.9) is obtained by combining the Fast Fourier Transform, the wavelet transform and the first derivative, i.e., the velocity of the first-level features.

The hardware implementation of the motion estimator enables the Integer Motion Estimation (IME) algorithm to be modified, and the Fractional Motion Estimation (FME) and Variable Block Size (VBS) modes to be selected and adjusted according to the performance to be reached. This low-cost implementation reaches high processing speeds. Using a Diamond Search (DS) method for the IME stage, a 1080 HD (1920x1088) video stream can be processed at up to 223 fps. Moreover, in FME mode, the same video stream can be processed at a frame rate of 29 fps at 250 MHz (around 232K macroblocks/s). In the near future, we plan to improve robustness by embedding the training system, allowing users to easily update the SVM model so as to take into account the specificities of the final home user.
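To make the classification stage described above more concrete, the following is a minimal sketch (not the authors' implementation) of how per-window low-level features could be expanded with an FFT, a wavelet transform and a first derivative before training an SVM. The window length, number of features, Haar wavelet, RBF kernel and the synthetic data are all illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def expand_features(window):
    """Expand a window of low-level features (e.g. bounding-box aspect ratio,
    centroid position, ellipse parameters), shaped (n_frames, n_features)."""
    fft_part = np.abs(np.fft.rfft(window, axis=0))        # Fast Fourier Transform
    approx, _detail = pywt.dwt(window, 'haar', axis=0)     # wavelet transform (approximation)
    velocity = np.diff(window, axis=0)                     # first derivative (velocity)
    return np.concatenate([fft_part.ravel(), approx.ravel(), velocity.ravel()])

# Synthetic stand-in data: 200 windows of 32 frames with 4 low-level features,
# label 1 = fall, 0 = no fall (hypothetical, for illustration only).
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 32, 4))
labels = rng.integers(0, 2, size=200)

X = np.stack([expand_features(w) for w in windows])
clf = SVC(kernel='rbf').fit(X, labels)

# Final decision stage: flag a fall when the classifier fires on a new window.
new_window = rng.normal(size=(32, 4))
is_fall = clf.predict([expand_features(new_window)])[0] == 1
print("fall detected:", is_fall)
```

As a rough consistency check on the motion-estimation figures above (assuming standard 16x16 macroblocks, which the 1088-line frame height suggests): a 1920x1088 frame contains 120 x 68 = 8160 macroblocks, so a throughput of about 232K macroblocks/s corresponds to 232000 / 8160 ≈ 28.4 frames per second, in line with the 29 fps quoted for the FME mode.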
No file deposited

Dates and versions

hal-00687276, version 1 (12-04-2012)

Identifiers

  • HAL Id: hal-00687276, version 1

Cite

Imen Charfi, Wajdi Elhamzi, Julien Dubois, Mohamed Atri, Johel Miteran. Video Scene analysis for a configurable hardware accelerator dedicated to Smart Camera. IEEE/ACM First Workshop on Architecture of Smart Camera, Apr 2012, Clermont Ferrand, France. pp.1. ⟨hal-00687276⟩