Improved QT-Opt Algorithm for Robotic Arm Grasping Based on Offline Reinforcement Learning

Bibliographic Details
Main Authors: Haojun Zhang, Sheng Zeng, Yaokun Hou, Haojie Huang, Zhezhuang Xu
Format: Article
Language: English
Published: MDPI AG 2025-05-01
Series: Machines
Subjects:
Online Access: https://www.mdpi.com/2075-1702/13/6/451
Description
Summary: Reinforcement learning plays a crucial role in the field of robotic arm grasping, providing a promising approach for the development of intelligent and adaptive grasping strategies. Because of distribution shift and local optima in the action space, traditional online reinforcement learning struggles to exploit existing grasping datasets, which leads to low sample efficiency. This study proposes an improved QT-Opt algorithm for robotic arm grasping based on offline reinforcement learning. The improved algorithm employs Particle Swarm Optimization (PSO) to identify the action with the highest value within the robotic arm's action space. Furthermore, a regularization term is introduced into the value iteration process to learn a conservative Q-function, enabling precise estimation of the robotic arm's action values. Experimental results indicate that the improved QT-Opt algorithm achieves higher average grasping success rates when trained on multiple offline grasping datasets and demonstrates improved stability throughout the training process.
ISSN:2075-1702
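
As a rough illustration of the two ideas described in the summary above, the following is a minimal Python sketch, not the authors' implementation: a PSO search for the highest-value action under a learned Q-function, plus a CQL-style conservative penalty. The `q_value` stand-in, the 4-dimensional action space, the bounds, and all hyperparameters are assumptions made purely for illustration.

```python
# Hypothetical sketch, not the paper's code. Assumes a stand-in q_value(state, action)
# function, a 4-D continuous action space in [-1, 1], and illustrative hyperparameters.
import numpy as np

def q_value(state, action):
    # Placeholder for a trained Q-network; returns a scalar value estimate.
    return -float(np.sum((action - 0.1 * state[: action.shape[-1]]) ** 2))

def pso_argmax_q(state, action_dim=4, n_particles=64, n_iters=20,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Particle Swarm Optimization search for the action with the highest Q-value."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, size=(n_particles, action_dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([q_value(state, p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    gbest_val = pbest_val.max()
    for _ in range(n_iters):
        r1 = np.random.rand(n_particles, action_dim)
        r2 = np.random.rand(n_particles, action_dim)
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([q_value(state, p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.max() > gbest_val:
            gbest, gbest_val = pos[np.argmax(vals)].copy(), vals.max()
    return gbest, gbest_val

def conservative_penalty(state, dataset_action, alpha=1.0, n_samples=16,
                         action_dim=4, bounds=(-1.0, 1.0)):
    """CQL-style regularizer: penalizes high Q-values on randomly sampled actions
    relative to the dataset action, pushing the learned Q-function to be conservative."""
    sampled = np.random.uniform(bounds[0], bounds[1], size=(n_samples, action_dim))
    sampled_q = np.array([q_value(state, a) for a in sampled])
    logsumexp_q = np.log(np.sum(np.exp(sampled_q)))
    return alpha * (logsumexp_q - q_value(state, dataset_action))

if __name__ == "__main__":
    state = np.random.default_rng(0).normal(size=8)
    best_action, best_q = pso_argmax_q(state)
    penalty = conservative_penalty(state, dataset_action=np.zeros(4))
    print("best action:", best_action, "Q:", best_q, "penalty:", penalty)
```

In the method described by the abstract, the Q-function would presumably be the trained QT-Opt critic, the PSO search would replace the cross-entropy-method argmax used by standard QT-Opt, and the conservative penalty would be added to the Bellman loss during offline training; the sketch above only outlines the shape of those components.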