Towards a Mixed-Reality Framework for Autonomous Driving
Abstract
Testing autonomous driving algorithms on mobile systems in simulation is an essential step for validating models and training the system on a large set of (possibly unpredictable and critical) situations. Yet, transferring a model from simulation to reality is challenging due to the reality gap (i.e., discrepancies between reality and the simulation models). Mixed-reality environments enable testing models on real vehicles without incurring financial and safety risks. They can also reduce development costs by providing faster testing and debugging for mobile robots. This paper presents preliminary work towards a mixed-reality framework for autonomous navigation based on RGB-D cameras. The aim is to represent the objects of the two environments within a single display using an augmentation strategy. We tested a first prototype by introducing a differential-drive robot able to navigate in its environment, visualize augmented objects, and detect them correctly using a pre-trained model based on Faster R-CNN.