Why Autonomous Robots?
Think about how frustrating it can be to play your favorite video game with lag. You input a command or move your controller, and you don't see the result of your action for several seconds. But what if your lag were much, much worse? We're not talking about a two-second delay, but something closer to 14 minutes. Still interested in playing?
This is the kind of communication delay that Mars rover operators encounter on a daily basis. They send a command to a vehicle some 200 million kilometers from home at time T=0 minutes. At around T=7 minutes, the rover receives the command and sends an acknowledgement home. At time T=14 minutes, mission control receives the message from the rover and proceeds with their next action.
Q: So how do you drive a rover safely in these conditions?
A: Well, very slowly. There's no one around to repair the rover if we crash into an obstacle on Mars. But we can speed things up by adding some level of autonomy.
If we let the rover make some decisions for itself, we can eliminate the wait time associated with the Earth-Mars communication delay. Of course, the software must be robust and this is very much an active area of research. The Atlantic recently featured a nice article on Curiosity's AEGIS system, which allows the rover to better select its own science targets.
Mission Overview
Our mission is to design a navigation system for the successor to the Mars 2020 mission. One objective of the Mars 2020 mission is to package soil samples for an eventual return to Earth. Our rover must safely navigate the Martian environment and locate the samples to be returned.
We designed and implemented a ROS-based software package that allows a rover to semi-autonomously navigate around an unknown maze. The mission consists of two stages: a semi-autonomous mapping phase and an autonomous navigation phase.
At the start of the mission, the rover receives a list of science targets from mission control. In the first phase, a human driver specifies objective waypoints near each science target. When a waypoint is set, the robot uses an A* path planner to navigate to the waypoint while also estimating the location of visible science targets.
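To estimate where a science target sits on the map, the detected target position (reported by the camera in the robot's own frame) has to be transformed into the map frame using the robot's current pose estimate. A minimal sketch of that transform is below; the function name and frame conventions (x forward, y left) are illustrative, not our actual ROS code.

```python
import math

def tag_to_map_frame(robot_x, robot_y, robot_theta, tag_x_r, tag_y_r):
    """Transform a target detection from the robot frame into the map frame.

    (robot_x, robot_y, robot_theta): robot pose in the map frame.
    (tag_x_r, tag_y_r): detected target position in the robot frame
    (x forward, y left).
    """
    c, s = math.cos(robot_theta), math.sin(robot_theta)
    # Rotate the detection by the robot's heading, then translate by its position.
    map_x = robot_x + c * tag_x_r - s * tag_y_r
    map_y = robot_y + s * tag_x_r + c * tag_y_r
    return map_x, map_y
```

In practice each target is observed several times as the rover drives, so the individual estimates can be averaged to reduce sensor noise.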
Once each science target has been located, the second phase automatically begins. The rover switches to autonomous mode and must return to each science target in a specified order.
Rover & Environment
Mars rovers are expensive, so the Robotic Autonomy course staff provided us with TurtleBots. The TurtleBot features an RGBD vision sensor, which we feed into the gmapping SLAM package to build a map of obstacles.
To represent the Martian environment, Ben Hockman built a maze in the Autonomous Systems Lab. Various science objectives are marked on the maze walls with AprilTag fiducials.
Implementation
Our rover software package consists of three major components: a supervisor, a navigator, and a controller.
The supervisor contains a state machine that monitors the robot's status, accepts science waypoints from the human-in-the-loop, and handles the transition from semi-autonomous to autonomous mode. The supervisor is the highest level of the autonomy stack and issues commands to the navigator and controller.
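The core of the supervisor logic can be sketched as a small state machine: the mission stays in semi-autonomous mode until every science target has been located, then flips to autonomous mode. This is a toy illustration with made-up names, not our actual supervisor node.

```python
from enum import Enum, auto

class Mode(Enum):
    SEMI_AUTONOMOUS = auto()   # human issues waypoints, rover scouts targets
    AUTONOMOUS = auto()        # rover revisits targets on its own

class Supervisor:
    """Minimal sketch of the mode-switching logic (names are illustrative)."""

    def __init__(self, science_targets):
        self.mode = Mode.SEMI_AUTONOMOUS
        self.pending = set(science_targets)  # targets not yet located
        self.located = []                    # targets located, in order found

    def on_target_located(self, target):
        """Called when the vision pipeline localizes a science target."""
        if target in self.pending:
            self.pending.remove(target)
            self.located.append(target)
        if not self.pending:
            # Every target found: hand control fully to the autonomy stack.
            self.mode = Mode.AUTONOMOUS
```

The real supervisor also tracks navigation status and relays waypoints, but the mode transition above is the piece that ends the human-in-the-loop phase.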
The navigator processes the obstacle map received from gmapping and plans paths using geometric A*. Because geometric A* treats the vehicle as a point, we inflate all obstacles by a buffer region to prevent collisions. The A* implementation is also designed to replan as infrequently as possible; the navigator reuses existing paths until another waypoint is issued or new sensor data indicates the current path is no longer viable.
The controller is the lowest-level node in the ROS stack. This node receives paths from the navigator and issues velocity/angular velocity commands to the TurtleBot.
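A common way to turn a path into velocity commands is a proportional go-to-goal law on the next waypoint: drive forward in proportion to the distance, turn in proportion to the heading error, and slow down when badly misaligned. The sketch below illustrates that idea; the gains and function name are illustrative, not our tuned controller.

```python
import math

def pose_controller(x, y, theta, wx, wy, k_v=0.5, k_w=1.5):
    """Compute (v, omega) driving a robot at pose (x, y, theta) toward (wx, wy).

    Simple proportional go-to-goal law; gains k_v and k_w are placeholders.
    """
    dx, dy = wx - x, wy - y
    rho = math.hypot(dx, dy)                              # distance to waypoint
    alpha = math.atan2(dy, dx) - theta                    # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    v = k_v * rho * math.cos(alpha)  # back off forward speed when misaligned
    omega = k_w * alpha
    return v, omega
```

In a ROS stack, `v` and `omega` would be published as the linear and angular components of a velocity command at a fixed control rate.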
Results
Now for the moment of truth! We deployed our "rover" on the "surface of Mars" and achieved some exciting results:
So what happened in this test? Let's break it down.
Minute 1
The robot started in its semi-autonomous state and received a mission specification from the course staff. Kyle, our human-in-the-loop rover operator, issued a waypoint command toward the first science objective. The rover planned its own path and drove to the commanded waypoint. Interestingly, the TurtleBot took a longer path to avoid the corner of the maze. This is likely a result of our buffering system; the TurtleBot perceived the obstacles to be larger than they actually were, and deemed the corner impassable.
Minute 2
The TurtleBot finished scouting all of the science objectives, which switched the system into autonomous mode. The TurtleBot then began to autonomously visit the science objectives in the specified order. All parts of the system performed extremely well!
Final Stretch
Of course, we hit our first obstacle as the TurtleBot entered the final stretch. You can see in the moments prior to the collision that the TurtleBot spends a lot of time readjusting itself. Again, the obstacle buffer radius made the rover believe the gap between obstacles was much narrower than it really was. But in the end, we arrived at the final objective and completed the mission. Success!
Takeaways
The entire team was extremely happy with our performance. We developed this system in three days (with less-than-optimal sleep levels). The test shown was only our second attempt running our software package on the hardware; with a few extra test runs to tune our controller gains and obstacle buffer radius, we're confident we could have achieved complete obstacle avoidance. Autonomy achieved!