Enhanced Vision SLAM for Wheeled Robots

In an article recently published in the journal PLOS One, researchers proposed a novel simultaneous localization and mapping (SLAM) algorithm, designated as a vision SLAM algorithm integrating multiple sensors (IMS-VSLAM), for wheeled robots in low-texture environments.

Study: Enhanced Vision SLAM for Wheeled Robots. Image credit: Julija Sh/Shutterstock

Traditional SLAM limitations

Wheeled robots are playing a critical role in driving robotic autonomy and intelligence. They can intelligently perform different tasks in industrial, logistics, and other fields, improving work efficiency and significantly reducing labor costs, and they can work continuously for long periods without rest. Control, localization, perception, and planning are the key technologies underpinning wheeled robots.

However, achieving accurate wheeled robot localization in complex, unknown environments remains a significant challenge. SLAM is one of the most common methods used for robot localization: it leverages sensor data to estimate the robot's position while simultaneously constructing a map of the environment. SLAM algorithms, however, encounter tracking loss and reduced localization accuracy in low-texture scenarios.

The proposed approach

In this study, researchers proposed a multi-feature fusion-based SLAM optimization (OMFF-SLAM) algorithm to address the tracking loss problem faced by classical SLAM algorithms in low-texture environments. This OMFF-SLAM algorithm was then combined with an inertial measurement unit (IMU), a visual-inertial fusion positioning optimization (VIFPO) algorithm, and multiple sensors to form the IMS-VSLAM algorithm, improving the localization accuracy of wheeled robots.
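Multi-feature fusion of this kind typically combines point features with line (and surface) features so that tracking survives in scenes where distinctive corners are scarce. As a rough, hypothetical illustration of that idea (not code from the study), the following Python sketch detects both point and line features with OpenCV; note that the line segment detector's availability varies by OpenCV build.

```python
import cv2

def extract_point_and_line_features(gray):
    """Detect point and line features in a grayscale image.
    Fusing both feature types helps in low-texture scenes where
    point features alone are too sparse (illustrative sketch)."""
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    # Line segment detector; absent from some OpenCV builds.
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]  # (N, 1, 4) endpoint array, or None
    return keypoints, descriptors, lines
```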

Visual SLAM algorithms leverage image processing and machine vision to enable robots to perceive their surroundings in new environments, facilitating self-localization and map construction. A two-wheeled robot (TWR) platform was then developed to validate the performance of the proposed algorithms. The objective of the research was to introduce new algorithms that effectively address the tracking loss, reduced accuracy, and inferior tracking performance of conventional methods.

Additionally, an encoder and the IMU were used to track the robot's motion trajectory whenever visual tracking was lost, compensating for the camera's shortcomings and providing a novel approach for wheeled robots with real-time requirements and higher accuracy demands. The proposed algorithms improve the precision of navigation and localization, enabling real-time navigation in complex environments. Thus, wheeled robots can respond more accurately and swiftly to changes in their surroundings in different practical applications.
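To make the fallback concrete: while visual tracking is lost, wheel-encoder distance and IMU heading suffice for short-term dead reckoning. The minimal Python sketch below illustrates this under assumed class and parameter names; it is a simplification for exposition, not the paper's implementation.

```python
import math

class DeadReckoningFallback:
    """Propagates a 2D pose from wheel encoders and IMU yaw
    while visual tracking is unavailable (illustrative only)."""

    def __init__(self, wheel_radius, ticks_per_rev):
        self.wheel_radius = wheel_radius
        self.ticks_per_rev = ticks_per_rev
        self.x = self.y = self.yaw = 0.0

    def update(self, d_ticks_left, d_ticks_right, imu_yaw):
        # Convert encoder tick increments to travelled distance per wheel.
        per_tick = 2.0 * math.pi * self.wheel_radius / self.ticks_per_rev
        d_left = d_ticks_left * per_tick
        d_right = d_ticks_right * per_tick
        d_center = 0.5 * (d_left + d_right)

        # Trust the IMU for heading; it drifts less than encoder-derived yaw.
        self.yaw = imu_yaw
        self.x += d_center * math.cos(self.yaw)
        self.y += d_center * math.sin(self.yaw)
        return self.x, self.y, self.yaw
```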

The OMFF-SLAM algorithm workflow consisted of two major parts: visual odometry and backend optimization. The visual odometry was responsible for processing depth and color images in real time, initializing and estimating the current frame's pose, and deciding whether to insert the current frame into the backend as a keyframe.
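The keyframe-insertion decision is typically a heuristic on tracking quality and camera motion. The sketch below shows one common form of such a test; the function name and thresholds are assumptions for illustration, not values from the paper.

```python
import numpy as np

def should_insert_keyframe(n_tracked, rel_translation, rel_rotation_deg,
                           min_tracked=50, max_trans=0.2, max_rot_deg=10.0):
    """Heuristic keyframe test: promote the current frame when tracking
    weakens or the camera has moved far enough (thresholds illustrative)."""
    if n_tracked < min_tracked:                      # tracking is getting weak
        return True
    if np.linalg.norm(rel_translation) > max_trans:  # moved far enough
        return True
    if rel_rotation_deg > max_rot_deg:               # rotated far enough
        return True
    return False
```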

The backend optimization part was responsible for estimating state variables, such as the points, lines, and surfaces of keyframes in the local map, using nonlinear optimization methods, and for reconstructing new map points, lines, and surfaces from keyframes through triangulation. The IMS-VSLAM algorithm pipeline comprised multisensor preprocessing, vision-IMU-encoder (V-IMU-E) joint initialization, V-IMU-E sliding window optimization, and loop closure optimization.
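The sliding-window part of such a pipeline keeps only a bounded set of recent keyframes in the local optimization. The hypothetical sketch below shows the window bookkeeping; a real system would marginalize the oldest keyframe into a prior rather than simply dropping it, and the optimization step here is only a placeholder.

```python
from collections import deque

class SlidingWindow:
    """Fixed-size window of recent keyframes for local optimization
    (illustrative; real systems marginalize rather than drop)."""

    def __init__(self, size=10):
        self.frames = deque(maxlen=size)

    def add_keyframe(self, frame):
        self.frames.append(frame)  # oldest frame falls out automatically

    def total_residual(self, residual_fn):
        # Placeholder for nonlinear least squares over the window's
        # poses and landmarks (points, lines, surfaces in OMFF-SLAM).
        return sum(residual_fn(f) for f in self.frames)
```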

The TWR platform comprised two independently driven wheels, whose speed magnitudes and directions were varied to steer the TWR along the planned path, and passive rear wheels that stabilized the platform. A laser radar was installed at the front and a computer support axle at the rear, the latter allowing the field of view of the RealSense D435i camera to be adjusted.
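For such a differential-drive layout, the standard kinematics relate the two wheel speeds to the robot's forward speed and turn rate, which is how unequal wheel speeds steer the platform. The short sketch below applies these well-known equations; the wheel radius and track width are hypothetical values, not the TWR's specifications.

```python
def diff_drive_velocity(omega_left, omega_right,
                        wheel_radius=0.05, track_width=0.3):
    """Standard differential-drive kinematics: wheel angular speeds (rad/s)
    to linear speed v (m/s) and yaw rate w (rad/s). Parameters illustrative."""
    v_left = omega_left * wheel_radius
    v_right = omega_right * wheel_radius
    v = 0.5 * (v_left + v_right)          # forward speed
    w = (v_right - v_left) / track_width  # yaw rate; unequal speeds steer
    return v, w
```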

Evaluation of the approach

Sparse indoor features (SIF) and regular indoor (RI) datasets were used to assess the OMFF-SLAM's effectiveness when the algorithm was applied to wheeled robots. A TWR robot was used to collect data with varying feature densities in indoor environments.

A comparative analysis was performed between the proposed OMFF-SLAM algorithm and common SLAM algorithms, including SLAM based on the fusion of binocular wire features and inertial navigation (BWFIN-SLAM) and improved SLAM based on light detection and ranging (LiDARO-SLAM). A slip verification experiment was performed to assess the application effectiveness and performance of the IMS-VSLAM algorithm by analyzing the slip detection capabilities of the algorithm.
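Accuracy comparisons of this kind are commonly scored with the absolute trajectory error (ATE) between the estimated and ground-truth paths. The paper's exact metric is not specified here, so the snippet below shows only a standard RMSE-style ATE as an assumed example.

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two aligned
    trajectories, given as (N, 2) or (N, 3) position arrays."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)  # per-pose position error
    return float(np.sqrt(np.mean(errors ** 2)))
```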

The SIF and RI datasets were used to perform comparative experiments investigating the IMS-VSLAM algorithm's application effectiveness in indoor scenarios. These experiments compared the IMS-VSLAM algorithm without closed-loop detection (WOCLD-IMS-VSLAM), the IMS-VSLAM algorithm with closed-loop detection (WCLD-IMS-VSLAM), and the OMFF-SLAM algorithm.

The performance of these algorithms was also compared using the outdoor dataset (OD) to analyze the IMS-VSLAM algorithm's application effectiveness in outdoor environments. Moreover, researchers evaluated the effectiveness of the IMS-VSLAM algorithm in scenarios with complex snowy road scenes and moving obstacles.

Significance of the study

Results showed that the OMFF-SLAM algorithm achieved the highest average precision in the SIF and RI datasets. In the RI dataset, the algorithm outperformed the LiDARO-SLAM and BWFIN-SLAM algorithms by 16.34% and 9.83%, respectively. OMFF-SLAM also outperformed the LiDARO-SLAM and BWFIN-SLAM algorithms by 4.67% and 3.04%, respectively, in the SIF dataset.

Additionally, the OMFF-SLAM algorithm reduced processing time by 12.6 ms compared to the LiDARO-SLAM algorithm. In the slip experiments, the IMS-VSLAM algorithm accurately identified instances of slipping.

Moreover, the IMS-VSLAM algorithm achieved an average processing time of 64.4 ms and an average precision of 85.4% on the indoor datasets. On the OD, the WCLD-IMS-VSLAM algorithm outperformed the WOCLD-IMS-VSLAM algorithm, exceeding it by 14.51% in average precision, with IMS-VSLAM showing an average processing time of 91.63 ms.

To summarize, the findings of this study demonstrated that the proposed OMFF-SLAM and IMS-VSLAM algorithms can help wheeled robots achieve real-time performance and high accuracy in complex scenarios.

Journal reference:

Written by

Samudrapom Dam

Samudrapom Dam is a freelance scientific and business writer based in Kolkata, India. He has been writing articles related to business and scientific topics for more than one and a half years. He has extensive experience in writing about advanced technologies, information technology, machinery, metals and metal products, clean technologies, finance and banking, automotive, household products, and the aerospace industry. He is passionate about the latest developments in advanced technologies, the ways these developments can be implemented in a real-world situation, and how these developments can positively impact common people.

