uB-VisioGeoloc: An image sequences dataset of pedestrian navigation including geolocalised-inertial information and spatial sound rendering of the urban environment's obstacles

Category

Journal Article

Authors

Scalvini, F., Bordeau, C., Ambard, M., Migniot, C., Vergnaud, M., Dubois, J.

Year

2024

Title

uB-VisioGeoloc: An image sequences dataset of pedestrian navigation including geolocalised-inertial information and spatial sound rendering of the urban environment's obstacles

Journal / book / conference

Data in Brief

Abstract

The proposed dataset is a collection of pedestrian navigation data sequences combining visual and spatial information. The sequences capture situations encountered by a pedestrian walking in an urban outdoor environment, such as moving along the sidewalk, navigating through a crowd, or crossing a street when the pedestrian traffic light is green. The acquired data consist of timestamped RGB-D images associated with GPS and inertial data (acceleration, rotation). These recordings were acquired by separate processes, avoiding delays during capture and guaranteeing synchronisation between the moment of acquisition by the sensor and the moment of recording on the system. The acquisition was made in the city of Dijon, France, covering narrow streets, wide avenues, and parks. Annotations of the RGB-D images are also provided as bounding boxes indicating the position of relevant static or dynamic objects present in a pedestrian area, such as a tree, bench, or person. This pedestrian navigation dataset aims to support the development of smart mobile systems to assist visually impaired people in their daily movements in an outdoor environment. In this context, the visual data and localisation sequences we provide can be used to develop appropriate visual processing methods to extract relevant information about the obstacles and their current positions on the path. Alongside the dataset, a visual-to-auditory substitution method has been employed to convert each image sequence into corresponding stereophonic sound files, allowing for comparison and evaluation. Synthetic sequences associated with the same information set are also provided, based on recordings of a displacement within the 3D model of a real place in Dijon.
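
The abstract describes timestamped RGB-D frames recorded alongside GPS and inertial measurements. As a purely illustrative sketch of how a user might align these streams, the Python snippet below matches a frame timestamp to the nearest GPS and IMU records; the file paths and CSV column names are assumptions made for the example, not the dataset's documented layout.

    import bisect
    import csv

    def load_stream(csv_path, time_key="timestamp"):
        """Load a timestamped record stream (e.g. GPS or IMU) sorted by time."""
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))
        rows.sort(key=lambda r: float(r[time_key]))
        return rows

    def nearest(rows, t, time_key="timestamp"):
        """Return the record whose timestamp is closest to time t."""
        times = [float(r[time_key]) for r in rows]
        i = bisect.bisect_left(times, t)
        best = min((j for j in (i - 1, i) if 0 <= j < len(rows)),
                   key=lambda j: abs(times[j] - t))
        return rows[best]

    # Hypothetical usage: align one RGB-D frame time with the sensor streams.
    gps = load_stream("sequence_01/gps.csv")   # assumed path and file name
    imu = load_stream("sequence_01/imu.csv")   # assumed path and file name
    frame_time = 12.345                        # timestamp of one RGB-D frame
    print(nearest(gps, frame_time), nearest(imu, frame_time))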

Volume

53

Keywords

Pedestrian navigation, Virtual scene, Real scene, RGB-D camera, GPS, IMU, Sonification, Artificial vision

Related links

  • https://www.sciencedirect.com/science/article/pii/S2352340924000611?via%3Dihub
