Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and the terrain. However, it is challenging to build an accurate physics model, or create informative labels to learn a model in a supervised manner, for these interactions. We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback in a self-supervised manner. Additionally, we propose a novel way of incorporating robot velocity into the costmap prediction pipeline. We validate our method in multiple short- and large-scale navigation tasks on a large, autonomous all-terrain vehicle (ATV) on challenging off-road terrains, and demonstrate ease of integration on a separate large ground robot. Our short-scale navigation results show that using our learned costmaps leads to overall smoother navigation, and provides the robot with a more fine-grained understanding of the interactions between the robot and different terrain types, such as grass and gravel. Our large-scale navigation trials show that we can reduce the number of interventions by up to 57% compared to an occupancy-based navigation baseline in challenging off-road courses ranging from 400 m to 3150 m.
System Overview
During training, the network takes in patches cropped from a top-down colored map and height map along the driving trajectory, as well as the parameterized velocity corresponding to each patch. The network predicts a traversability cost for each patch, supervised by a pseudo ground-truth cost generated from IMU data. During testing, the whole map is subsampled into small patches, which are fed into the network to generate a dense, continuous costmap.
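As a concrete illustration of the test-time procedure, the sketch below assembles patch-wise predictions into a dense costmap. The CostNet architecture, the 64-pixel patch size, the 16-cell stride, and the plain scalar velocity input are illustrative assumptions for this sketch, not the components used in our actual pipeline.

# Minimal sketch: patch-wise cost prediction assembled into a dense costmap.
# CostNet, patch size, and stride are illustrative placeholders.
import torch
import torch.nn as nn

class CostNet(nn.Module):
    """Hypothetical network: RGB + height patch and a velocity -> scalar cost."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 + 1, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, patch, vel):
        feat = self.encoder(patch)                       # (B, 32) patch features
        return self.head(torch.cat([feat, vel], dim=1))  # (B, 1) predicted cost

@torch.no_grad()
def dense_costmap(rgb_map, height_map, vel, net, patch=64, stride=16):
    """Subsample the top-down maps into patches and average overlapping predictions."""
    maps = torch.cat([rgb_map, height_map], dim=0)       # (4, H, W) colored map + height map
    H, W = maps.shape[1:]
    costmap, counts = torch.zeros(H, W), torch.zeros(H, W)
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            crop = maps[:, y:y + patch, x:x + patch].unsqueeze(0)
            cost = net(crop, vel.view(1, 1)).item()
            costmap[y:y + patch, x:x + patch] += cost    # accumulate patch cost
            counts[y:y + patch, x:x + patch] += 1
    return costmap / counts.clamp(min=1)                 # dense, continuous costmap

if __name__ == "__main__":
    net = CostNet()
    rgb, height = torch.rand(3, 256, 256), torch.rand(1, 256, 256)
    print(dense_costmap(rgb, height, torch.tensor([3.0]), net).shape)  # (256, 256)

In the actual pipeline, the velocity associated with each patch is parameterized rather than passed as a raw scalar, and training supervises the predicted cost against the pseudo ground-truth cost generated from IMU data.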
ATV Experiment: Short-Scale Navigation
In this experiment, the goal of the robot is to drive 200 m straight ahead. Using the baseline costmap and navigation stack, the robot takes the shortest path and drives through the grass. With our learned costmaps, the robot steers around the patch of grass, since grass has a higher predicted cost than the dirt path.
ATV Experiment: Large-Scale Navigation
With the baseline costmap, the robot does not distinguish between different terrains of the same height, and cuts the corner to get to the waypoint faster. However, this corner is too tight and leads to an intervention. With our learned costmaps, the robot takes a wider turn in order to avoid vegetation and rough terrain, which leads to the robot taking a better line through the turn.
Warthog Experiment: Simple Navigation
In this experiment, the robot has to reach a goal 50 m down a paved path. The baseline navigation stack cuts directly through the grass to reach the goal, since it does not reason about the traversability of different terrain types. Using our learned costmaps, the robot reasons that the pavement is smoother than grass and stays on the path to reach the goal.
Warthog Experiment: Fork in the Road
On the Forest-Fork course, the robot must choose between two paths to reach the goal. The path on the left is covered with small obstacles, while the path on the right is clear. With the baseline navigation stack, the robot chooses the cluttered, rough path on the left more often than the smooth path on the right. With our costmaps, the robot consistently chooses the smooth path on the right.
Large-Scale Navigation Trials
We provide full experiment runs for our large-scale navigation trials below.
Red Course
Length: 400 m. Flat terrain, wooded areas, tight turns.
Baseline: 7 interventions. Ours: 3 interventions.

Blue Course
Length: 3150 m. Hilly terrain, large gravel, some vegetation.
Baseline: 9 interventions. Ours: 6 interventions.

Green Course
Length: 950 m. Flat terrain, mostly vegetation.
Baseline: 11 interventions. Ours: 7 interventions.
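For context, the "up to 57%" figure quoted in the abstract corresponds to the Red course above, where interventions drop from 7 with the baseline to 3 with our costmaps:

(7 - 3) / 7 ≈ 0.57, i.e., a 57% reduction in interventions.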
Related Works
Triest, Sivaprakasam, Wang, Wang, Johnson, Scherer. TartanDrive: A Large-Scale Dataset for Learning Off-Road Dynamics Models. In ICRA 2022. [ArXiv]
Triest, Guaman Castro, Maheshwari, Sivaprakasam, Wang, Scherer. Learning Risk-Aware Costmaps via Inverse Reinforcement Learning for Off-Road Navigation. In ICRA 2023. [ArXiv]
Citation
Mateo Guaman Castro, Samuel Triest, Wenshan Wang, Jason M. Gregory, Felix Sanchez, John G. Rogers III, Sebastian Scherer. How Does It Feel? Self-Supervised Costmap Learning for Off-Road Vehicle Traversability. ICRA, 2023.
@inproceedings{hdif2023,
  title={How Does It Feel? Self-Supervised Costmap Learning for Off-Road Vehicle Traversability},
  author={Guaman Castro, Mateo and Triest, Samuel and Wang, Wenshan and Gregory, Jason M. and Sanchez, Felix and Rogers III, John G. and Scherer, Sebastian},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2023},
  organization={IEEE}
}