Name: HEITOR DELESPORTE CONCEGLIERI
Publication date: 25/07/2023
Examining board:
| Name | Role |
| --- | --- |
| ANSELMO FRIZERA NETO | Chair |
| CAMILO ARTURO RODRIGUEZ DIAZ | Internal Examiner |
| FAUSTO ORSI MEDOLA | External Examiner |
| RICARDO CARMINATI DE MELLO | Co-advisor |
Summary: The number of visually impaired individuals in society has been increasing, and vision is of paramount importance for an individual's perception, orientation, and localization in the environment. Assistive devices such as traditional canes, smart canes equipped with ultrasonic and infrared sensors, and guide dogs provide support to users. However, these devices cannot acquire the information needed in situations commonly reported by users, such as the detection and positioning of elevated obstacles. This study presents and validates a strategy for object detection and localization based on images captured by an RGB-D camera, and discusses the challenges and potential of integrating this strategy with devices already employed for mobility assistance. RGB-D cameras provide relevant information through their depth and RGB sensors, complementing traditional devices and supporting users' mobility and navigation. Since variable system parameters (e.g., the azimuth and polar angles and the depth parameter) can considerably affect the system's operation, two sets of experiments are conducted to validate the strategy and to assess the impact of detection quality and system operation during navigation. For the experiments, a new dataset with a newly trained class, "Balloons," is generated. Training the new class enabled the system to detect 100% of the trained-class images with a minimum confidence threshold of 0.887, indicating good detection capability. The strategy successfully detected static and dynamic objects, including elevated ones and objects on the same camera axis. Results from the same experiments conducted with the strategy implemented show azimuth-angle deviations with a maximum variation of 2% and depth-parameter deviations ranging from 0.05% to 0.3%. These results demonstrate the feasibility of the proposed detection strategy in both static and dynamic scenarios, opening alternatives for new studies on object detection for the assistance of visually impaired individuals. Contributions of this research include the generation of new datasets for deploying the system in different scenarios, such as urban environments; integration into various mobile computing platforms for convenience and mobility; and the implementation of other user feedback methods.
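The abstract describes localizing detected objects through azimuth and polar angles plus a depth parameter derived from the RGB-D camera. As a minimal sketch of that idea (not the thesis's actual implementation), the following assumes a pinhole camera model with hypothetical intrinsics: a detected object's pixel center and depth reading are back-projected into camera-frame coordinates and converted to spherical angles.

```python
import math

def pixel_to_spherical(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading (meters) into the
    camera frame (z forward, x right, y down), then return the radial
    distance, azimuth angle, and polar angle in degrees."""
    # Pinhole back-projection using focal lengths (fx, fy) and
    # principal point (cx, cy) -- all assumed intrinsics here.
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    z = depth_m
    r = math.sqrt(x * x + y * y + z * z)          # straight-line distance
    azimuth = math.degrees(math.atan2(x, z))      # left/right of optical axis
    polar = math.degrees(math.acos(y / r)) if r > 0 else 0.0  # from vertical axis
    return r, azimuth, polar

# Hypothetical 640x480 RGB-D intrinsics (illustrative values only).
fx = fy = 615.0
cx, cy = 320.0, 240.0

# Object detected at the image center, 2 m away: on-axis, so azimuth = 0.
r0, az0, pol0 = pixel_to_spherical(320, 240, 2.0, fx, fy, cx, cy)

# Object detected 80 px to the right of center at the same depth.
r1, az1, pol1 = pixel_to_spherical(400, 240, 2.0, fx, fy, cx, cy)
```

An object at the image center yields a zero azimuth and a 90-degree polar angle (level with the optical axis), while horizontal pixel offset maps to a left/right bearing that a feedback interface could announce to the user.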