Demonstrating CropFollow++
Robust Under-Canopy Navigation with Keypoints
University of Illinois at Urbana-Champaign
Correspondence to {av7, girishc}@illinois.edu
Outstanding Demo Paper Award Finalist
We present an empirically robust vision-based navigation system for under-canopy agricultural robots using semantic keypoints.
Autonomous under-canopy navigation is challenging due to the tight spacing between crop rows (~0.75 m), degraded RTK-GPS accuracy caused by multipath error, and noise in LiDAR measurements caused by excessive clutter.
Our earlier work, CropFollow, addressed these challenges with a learning-based visual navigation system built on end-to-end perception.
However, that approach has two limitations: it lacks an interpretable representation, and it is sensitive to outlier predictions under occlusion because it provides no confidence measure.
Our system, CropFollow++, introduces a modular perception architecture with a learned semantic keypoint representation.
This representation is more modular and more interpretable than CropFollow's end-to-end perception, and it provides a confidence measure for detecting occlusions.
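To make the confidence-based occlusion handling concrete, here is a minimal sketch, assuming the network outputs one heatmap per semantic keypoint and that the heatmap peak value is used as a per-keypoint confidence (the paper does not specify this exact formulation; the threshold and helper names below are illustrative, not the authors' implementation):

```python
import numpy as np

CONF_THRESHOLD = 0.5  # assumed tuning parameter, not from the paper


def decode_keypoints(heatmaps: np.ndarray):
    """Decode (K, H, W) heatmaps into pixel keypoints with confidences."""
    keypoints, confidences = [], []
    for hm in heatmaps:
        v, u = np.unravel_index(np.argmax(hm), hm.shape)
        keypoints.append((u, v))               # (u, v) pixel coordinates
        confidences.append(float(hm[v, u]))    # peak value as confidence
    return np.array(keypoints), np.array(confidences)


def filter_occluded(keypoints, confidences, threshold=CONF_THRESHOLD):
    """Drop low-confidence keypoints, e.g. rows hidden by leaf occlusion."""
    mask = confidences >= threshold
    return keypoints[mask], mask
```

Under this scheme, frames whose keypoints all fall below the threshold can be flagged as fully occluded and handled by the downstream controller (for example, by holding the previous reference) rather than acted on blindly.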
CropFollow++ significantly outperformed CropFollow in the number of collisions (13 vs. 33) in field tests spanning ~1.9 km each in challenging late-season fields with significant occlusions.
We also deployed CropFollow++ on multiple under-canopy cover-crop planting robots at large scale (25 km in total) across varied field conditions, and we discuss the key lessons learned from these deployments.
The RGB camera image is the input to our neural network, which predicts semantic keypoints that locate the crop rows. From these keypoints we generate a reference trajectory, which a model predictive controller (MPC) tracks to navigate the robot.
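The sketch below illustrates one way this keypoints-to-MPC pipeline could be wired up; it is an assumption-laden example, not the authors' controller. It assumes the row keypoints have already been projected into the robot frame (x forward, y left, in metres), fits a line to each row, takes the midline as the reference, and tracks it with a simple receding-horizon unicycle MPC (`reference_from_rows`, `mpc_step`, and all parameters are hypothetical names and values):

```python
import numpy as np
from scipy.optimize import minimize


def reference_from_rows(left_pts: np.ndarray, right_pts: np.ndarray,
                        n: int = 10, lookahead: float = 2.0) -> np.ndarray:
    """Fit a line to each crop row and sample the midline as the reference path."""
    fit_l = np.polyfit(left_pts[:, 0], left_pts[:, 1], 1)
    fit_r = np.polyfit(right_pts[:, 0], right_pts[:, 1], 1)
    xs = np.linspace(0.0, lookahead, n)
    ys = 0.5 * (np.polyval(fit_l, xs) + np.polyval(fit_r, xs))
    return np.stack([xs, ys], axis=1)


def mpc_step(ref: np.ndarray, v: float = 0.5, dt: float = 0.2,
             horizon: int = 10) -> float:
    """Pick angular velocities that keep a unicycle model on the reference."""
    def rollout_cost(omegas):
        x, y, th, cost = 0.0, 0.0, 0.0, 0.0
        for k, w in enumerate(omegas):
            # Forward-simulate the unicycle model one step.
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            th += w * dt
            r = ref[min(k, len(ref) - 1)]
            # Penalise deviation from the reference and aggressive steering.
            cost += (x - r[0]) ** 2 + (y - r[1]) ** 2 + 0.1 * w ** 2
        return cost

    res = minimize(rollout_cost, np.zeros(horizon), method="Nelder-Mead")
    return res.x[0]  # apply only the first control (receding horizon)


# Example: rows roughly 0.75 m apart, robot slightly off-centre.
left = np.array([[0.5, 0.45], [1.0, 0.44], [1.5, 0.43]])
right = np.array([[0.5, -0.30], [1.0, -0.31], [1.5, -0.32]])
omega = mpc_step(reference_from_rows(left, right))
print(f"commanded angular velocity: {omega:.3f} rad/s")
```

In a real system the optimizer, vehicle model, and cost weights would differ, but the structure (keypoints, then midline reference, then receding-horizon tracking) mirrors the pipeline described in the caption.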