Autonomous Submersible ROV
During the summer of 2021, I had the privilege of working for the Applied Physics Laboratory at the University of Washington. Our project was to take two commercially available underwater drones (Remotely Operated Vehicles, or ROVs) and evaluate whether they could serve as platforms for further research applications. To demonstrate this, we were tasked with making one ROV follow the other around autonomously using OpenCV and ROS. We also needed to determine the feasibility of integrating additional sensors into the ROV.
The following video was shot during our final demonstration. The ROV with the fiducial marker is being manually controlled while the other ROV is autonomously controlled.
One of the bigger obstacles for me to overcome was that I had very little experience working with OpenCV and ROS. Over the course of the internship, I learned and became much more comfortable with both software packages. One of the largest parts of this project was reverse engineering the ROV's preexisting architecture well enough to integrate our own control code and external sensors into it. Additionally, this was a brand new project for the lab: none of our mentors or fellow interns had any previous experience working with these ROVs, so much of the project was simply building up the knowledge needed to reach our final goal.
Through a lot of trial and error, we built a custom interface that allowed both manual control of the ROV and autonomous following of a fiducial marker (specifically, we used ArUco markers). This new interface used ROS to communicate with and control the ROV.
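To give a sense of what that looked like, below is a minimal sketch of such an interface node. This is not our actual code: the node name, topic, and message type are placeholders I'm assuming for illustration (a real BlueROV2 setup communicates through its MAVLink/ArduSub stack, so the actual topics would depend on that bridge).

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node that can drive an ROV in either manual
# or autonomous mode. Topic name and message type are hypothetical.
import rospy
from geometry_msgs.msg import Twist

class ROVInterface:
    def __init__(self):
        rospy.init_node("rov_interface")
        # Hypothetical command topic; a real BlueROV2 would publish to
        # whatever topics its MAVLink/mavros bridge exposes.
        self.cmd_pub = rospy.Publisher("/rov/cmd_vel", Twist, queue_size=1)
        self.autonomous = False  # toggled by the operator

    def send_command(self, forward, lateral, vertical, yaw_rate):
        """Publish one velocity command to the ROV."""
        cmd = Twist()
        cmd.linear.x = forward
        cmd.linear.y = lateral
        cmd.linear.z = vertical
        cmd.angular.z = yaw_rate
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    interface = ROVInterface()
    rate = rospy.Rate(10)  # 10 Hz control loop
    while not rospy.is_shutdown():
        # Hold station until a joystick or the autonomous follower
        # supplies real commands.
        interface.send_command(0.0, 0.0, 0.0, 0.0)
        rate.sleep()
```

The actual interface layered joystick handling and the autonomous follower on top of a command path like this one.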
The key feature of this new interface was an autonomous mode for controlling the ROV. This mode would take input from the active camera and, using OpenCV, scan the incoming image for any of the fiducial markers we were using for this project. If a marker was found, the ROV would attempt to navigate to a point a set distance directly behind it. Because we knew the exact parameters of the camera and the size of the fiducial marker, we could recover the relative x, y, and z distances of the marker from the camera, as well as its pitch, roll, and yaw. With this information, it was just a matter of programming responses for whenever any of these values strayed from our desired setpoints. For example, if the marker's z value grew larger than the desired range, the ROV would autonomously move forward until the value was back within the range we wanted.
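To make that pipeline concrete, here is a rough sketch of one detect-and-respond step, using the classic cv2.aruco API that OpenCV shipped at the time (estimatePoseSingleMarkers has since been deprecated in newer releases). The calibration values, marker dictionary, marker size, setpoint, and gain are all placeholder assumptions, not the values we actually used.

```python
import cv2
import numpy as np

# Placeholder calibration; the real values come from calibrating the
# ROV's camera and measuring the printed marker.
CAMERA_MATRIX = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
DIST_COEFFS = np.zeros(5)
MARKER_LENGTH = 0.15   # marker side length in meters (assumed)
DESIRED_Z = 1.0        # follow distance in meters (assumed)
TOLERANCE = 0.2        # dead band around the setpoint (assumed)

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def follow_step(frame):
    """Return a forward command in [-1, 1] from one camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None:
        return 0.0  # no marker in view: hold position

    # Pose of each detected marker relative to the camera.
    # tvecs holds (x, y, z) translations; rvecs encodes the marker's
    # orientation (pitch, roll, yaw) if alignment is needed.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH, CAMERA_MATRIX, DIST_COEFFS)
    z = tvecs[0][0][2]

    # Simple proportional response: if the marker's z distance drifts
    # out of the desired range, drive forward (or back) to correct it.
    error = z - DESIRED_Z
    if abs(error) < TOLERANCE:
        return 0.0
    return float(np.clip(0.5 * error, -1.0, 1.0))
```

The same pattern extends to the other axes: the x and y components of the translation drive lateral and vertical corrections, while the rotation vector supplies the pitch, roll, and yaw needed to stay lined up behind the marker.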
In the end, we managed to deliver a demonstration of one of the ROVs autonomously following the other around! We also determined that the ROV would be a good platform for further research applications: it is fairly compact, customizable, and able to handle integration with additional sensors. This, combined with the onboard Raspberry Pi, makes the BlueROV2 an attractive option for tether-less applications. It is also easy to integrate custom control code, so the range of potential applications is quite large.
There was a lot more to this project than I can quickly cover here, including the initial build of the ROVs, troubleshooting hardware issues, the intern team's journey through learning ROS, the hardware and software integration of a stereo camera, tons of testing, and so much more. To read the full report, which goes into much more detail, check out our Summer Interns Report: BlueROV Team.
