Positional Tracking Study of Greenhouse Mobile Robot Based on Improved Monodepth2
Main Authors:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11029014/
Summary: This paper presents a self-supervised monocular position-tracking model tailored to greenhouse environments, where mutual crop shading and homogeneous color textures complicate feature extraction, resulting in blurred depth-map boundaries and low-precision position estimation. Building on the Monodepth2 baseline, the model incorporates three key enhancements: replacing the original backbone with ResNeXt50 to improve global information acquisition; integrating a hybrid convolution (HC) module into the encoder to expand the receptive field and capture multi-scale contextual features; and introducing a coordinate attention (CA) mechanism in the decoder to enhance discriminative feature extraction. Experiments on a wheeled robot platform in a strawberry greenhouse demonstrate significant improvements: relative to the original backbone, the proposed model reduces position and attitude RMSE by 0.038 m and 0.012 rad, respectively; relative to a baseline without HC, RMSE falls by 0.048 m and 0.017 rad; and the CA-augmented version achieves RMSE reductions of 0.059 m and 0.034 rad over the CA-free variant. These results surpass existing monocular tracking methods and offer a technical foundation for vision-system design in greenhouse mobile robotics.
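For orientation, the coordinate attention (CA) idea mentioned in the summary can be illustrated with a minimal NumPy sketch: the feature map is pooled along each spatial axis separately, each pooled descriptor is projected and squashed into a gate, and the gates reweight the input along height and width. The projection matrices `w_h` and `w_w` here are illustrative stand-ins for the learned 1×1 convolutions; this is not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Minimal coordinate-attention gate for a feature map x of shape (C, H, W).

    w_h and w_w are illustrative (C, C) projections standing in for the
    learned 1x1 convolutions of the real module.
    """
    # direction-aware pooling: one descriptor per row, one per column
    pool_h = x.mean(axis=2)          # average over width  -> (C, H)
    pool_w = x.mean(axis=1)          # average over height -> (C, W)
    # per-direction gates in (0, 1)
    gate_h = sigmoid(w_h @ pool_h)   # (C, H)
    gate_w = sigmoid(w_w @ pool_w)   # (C, W)
    # broadcast the gates back over the feature map and reweight it
    return x * gate_h[:, :, None] * gate_w[:, None, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))   # toy feature map: 4 channels, 8x8
w = np.eye(4)
y = coordinate_attention(x, w, w)
print(y.shape)  # (4, 8, 8)
```

With zero projection weights both gates are sigmoid(0) = 0.5 everywhere, so the output is exactly 0.25·x, which makes the gating behavior easy to sanity-check.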
ISSN: 2169-3536