ROS2 Robotic Handoff
A semester-long project for a robotics course at the University of Pittsburgh, this system coordinates two robots to autonomously transport an object through an unknown environment. An Interbotix robotic manipulator locates and picks up a randomly placed highlighter using image recognition, then hands it off to a TurtleBot 3 rover equipped with LIDAR. The rover navigates an unmapped maze and releases the highlighter at the designated endpoint. Built on ROS2, the system uses a generalized procedure to reliably coordinate both robots and consistently deliver the object to the correct drop-off location.
Interbotix arm setup and programming
We chose a unique camera setup for our robot arm to reduce points of failure and increase reliability. While other teams took the standard approach of mounting the camera in a static location, we mounted the camera to the robot arm itself. A statically mounted camera requires calculating the precise distance and orientation of the camera relative to the arm; the highlighter's coordinates must then be accurately determined in two dimensions and transformed into the frame of the arm's origin. With the camera mounted to the arm, we instead had the arm slowly rotate in a circle from its base position while a PID loop centered the highlighter in the camera frame. Once the highlighter was centered, the arm extended from its folded position to maximum extension, guaranteeing that the gripper passed over the highlighter at some point along the traversal as long as it was within the arm's maximum reach. This meant we did not have to transform or measure any coordinates: the PID loop positioned the arm in the correct angular direction regardless of the highlighter's radial distance. The result was significantly more reliable highlighter capture than other groups achieved.
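The rotate-and-center step can be sketched as a simple control loop. This is a minimal illustration, not our actual node: the `centering_step` helper, the gains, and the idea that the camera reports a horizontal pixel offset of the highlighter from image center are all assumptions for the sake of the example; the real system would read that offset from image recognition and command the Interbotix waist joint.

```python
class PID:
    """Minimal PID controller; gains below are illustrative, not our tuned values."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


def centering_step(waist_angle, pixel_offset, pid, dt):
    """One control tick: nudge the waist joint so the highlighter's horizontal
    pixel offset from the image center is driven toward zero."""
    # The error is the negated offset: a target right of center (positive
    # offset) should rotate the waist so the offset returns to zero.
    return waist_angle + pid.step(-pixel_offset, dt) * dt
```

Once the loop converges (offset near zero), the arm is pointing at the highlighter and the full-extension sweep can begin.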
TurtleBot 3 setup and programming
We kept the physical modifications to accommodate the highlighter simple: instead of creating geometry for the highlighter to sit on, we used two walls made of popsicle sticks to contain it as the robot moved. Testing proved that pushing the highlighter along the ground did not interfere with the robot's ability to navigate the maze, and because the highlighter was not contained within the robot, releasing it was as simple as backing up.

Traversal of the maze was accomplished with a left-wall-biased system that used two PID loops. One loop minimized the angular disparity between the wall and the robot, keeping the robot as parallel to the wall as possible. The other loop held the robot at a set distance from the wall, speeding up one side of the robot to steer back toward the correct distance. After much testing and careful tuning of the PID gains, oscillations were reduced and the system could consistently track the walls of the maze. We added extra rules for detecting corners to prevent the robot from spinning in circles, which happened occasionally. The final addition was a downward-facing webcam that detected the blue tape at the end of the maze and triggered a command to reverse the robot and release the highlighter. Altogether, this system allowed the rover to consistently traverse simple mazes and release the highlighter in the correct location. We focused heavily on smart physical design to reduce points of failure, and in doing so we achieved higher accuracy than many teams that took a standard approach and tried to increase reliability purely through more complex algorithms.
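The two wall-following loops can be sketched as follows. This is a hedged sketch under stated assumptions, not our actual controller: the idea of estimating the wall angle from two left-side LIDAR rays a known distance apart, the function and parameter names, and all gains are hypothetical; the real node would publish the resulting (linear, angular) pair as a ROS2 `cmd_vel` Twist.

```python
import math

class PID:
    # Compact PID; gains are illustrative, not our tuned values.
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


def wall_follow_cmd(front_left, rear_left, ray_separation, target_dist,
                    angle_pid, dist_pid, dt, forward_speed=0.15):
    """Fuse the two loops into one (linear, angular) velocity command.

    front_left / rear_left are ranges to the left wall from two LIDAR rays
    taken ray_separation metres apart along the robot; this geometry is an
    assumption about how the wall pose would be estimated.
    """
    # Angle of the wall relative to the robot's heading (0 = parallel).
    wall_angle = math.atan2(rear_left - front_left, ray_separation)
    distance = min(front_left, rear_left)
    # Loop 1 drives wall_angle -> 0 (stay parallel);
    # loop 2 drives distance -> target_dist (hold the standoff distance).
    # Positive angular velocity turns the robot left, toward the left wall.
    turn = angle_pid.step(-wall_angle, dt) + dist_pid.step(distance - target_dist, dt)
    return forward_speed, turn
```

Summing the two corrections into one angular command means the parallel-keeping loop and the distance-keeping loop can be tuned largely independently, which is what made reducing the oscillations tractable.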

