
Workshop on

Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications (M2SFA2)

October 18, 2008, Marseille, France

In conjunction with The 10th European Conference on Computer Vision (ECCV)

Call for papers

Advances in sensing technologies, together with the increasing availability of computational power and efficient bandwidth usage, are favouring the emergence of applications based on distributed systems that combine multiple cameras with other sensing modalities. These applications include audiovisual scene analysis, immersive human-computer interfaces, occupancy sensing and event detection for smart environments, automated collection, summarization and distribution of multi-sensor data, and enriched personal communication, to mention just a few.

This workshop addresses the principal technical challenges in multi-camera processing when the video modality is complemented by other inputs such as audio, speech, context, and depth sensors. The goal of the workshop is to gather high-quality contributions describing leading-edge research in the joint capture and analysis of multi-sensor signals, and to stimulate interaction among the participants through a panel discussion followed by a group discussion. Topics of interest to the workshop include:

  • Multi-camera and multi-modal systems and sensor fusion
  • Distributed sensing and processing methods for human-centric applications
  • Distributed multi-modal scene analysis and event interpretation
  • Automated annotation and summarization of multi-view video
  • Automated creation of audiovisual reports (from meetings, lectures, sport events, etc.)
  • Multi-modal gesture and speech recognition
  • Multi-modal human-computer interfaces
  • Data processing and fusion in distributed embedded systems
  • Context-awareness and behaviour modelling
  • Performance evaluation metrics
  • Applications in distributed surveillance, smart rooms, virtual reality, and e-health

Papers reporting on multi-camera networks (without multi-modal sensing) are also welcome.

General chairs

  • Andrea Cavallaro => Queen Mary, U. of London
  • Hamid Aghajan => Stanford University

Program committee

  • Francois Bremond => INRIA, France
  • Josep Casas => UPC, Spain
  • Tanzeem Choudhury => Dartmouth College, USA
  • Maurice Chu => PARC, USA
  • C. De Vleeschouwer => UCL, Belgium
  • Pier Luigi Dragotti => Imperial College, UK
  • Pascal Frossard => EPFL, Switzerland
  • Luis Matey => CEIT, Spain
  • Jean-Marc Odobez => IDIAP, Switzerland
  • James Orwell => Kingston U., UK
  • Wilfried Philips => U. of Ghent, Belgium
  • Ronald Poppe => U. of Twente, Netherlands
  • Fatih Porikli => MERL, USA
  • Carlo Regazzoni => U. of Genoa, Italy
  • Rainer Stiefelhagen => U. of Karlsruhe, Germany
  • Ming-Hsuan Yang => Honda Research, USA
  • Li-Qun Xu => BT, UK

Sponsor

MERL (Mitsubishi Electric Research Laboratories)