
Autonomous Underwater Torpedo

As a team of 4, in 3.5 months, we created an autonomous underwater torpedo to compete in a race through an underwater obstacle course in the shortest time. The entire system is built from the ground up for autonomy, including a fully custom mechanical system, embedded system, and autonomy stack.


As team lead, I defined the product scope based on an analysis of the competition and on timelines worked out with the team. I also independently developed the vehicle's autonomy stack. Code can be found on my GitHub.


Design Overview

System Architecture

  • Two fisheye cameras, one in the front and one in the rear, each with a dedicated processor (a Raspberry Pi), run perception algorithms to locate vision targets along the course in the field of view

  • Perception data, along with IMU and depth sensor data, is sent over USB to an Android phone, which runs the autonomy stack to control the vehicle and navigate the course

  • After running through the autonomy stack (outlined in detail below), the desired thrust for each of the 6 thrusters is sent to the embedded system on a custom PCB

  • 6 custom sensored brushless motor controllers interpret the desired thrust and drive the motors using thrust mappings calibrated offline

  • The thruster configuration allows the vehicle to translate in any direction and rotate to any orientation, covering all 6 degrees of freedom in 3D space (a minimal sketch of the wrench-to-thruster allocation follows this list)
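
To make the wrench-to-thruster step concrete, here is a minimal allocation sketch in Java. The class name, matrix layout, and the assumption that a precomputed 6x6 allocation matrix is invertible are illustrative only; the actual implementation is on GitHub.

```java
// Hypothetical sketch: mapping a desired 6-DOF wrench
// [Fx, Fy, Fz, Mx, My, Mz] to 6 individual thruster commands.
public final class ThrustAllocator {

    // Column i holds the force/moment produced by thruster i at unit thrust,
    // expressed in the body frame. With 6 thrusters spanning all 6 DOF the
    // matrix is square and invertible, so allocation is a linear solve.
    private final double[][] A;

    public ThrustAllocator(double[][] allocationMatrix) {
        this.A = allocationMatrix;
    }

    /** Solves A * u = wrench for the 6 thruster forces u (Gaussian elimination). */
    public double[] allocate(double[] wrench) {
        int n = 6;
        double[][] m = new double[n][n + 1]; // augmented matrix [A | wrench]
        for (int r = 0; r < n; r++) {
            System.arraycopy(A[r], 0, m[r], 0, n);
            m[r][n] = wrench[r];
        }
        for (int col = 0; col < n; col++) {
            // Partial pivoting for numerical stability.
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(m[r][col]) > Math.abs(m[pivot][col])) pivot = r;
            double[] tmp = m[col]; m[col] = m[pivot]; m[pivot] = tmp;
            for (int r = col + 1; r < n; r++) {
                double f = m[r][col] / m[col][col];
                for (int c = col; c <= n; c++) m[r][c] -= f * m[col][c];
            }
        }
        double[] u = new double[n];
        for (int r = n - 1; r >= 0; r--) { // back substitution
            double s = m[r][n];
            for (int c = r + 1; c < n; c++) s -= m[r][c] * u[c];
            u[r] = s / m[r][r];
        }
        return u; // one thrust value per thruster, sent to the motor controllers
    }
}
```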


Autonomy Stack Architecture

  • ROSJava is used on the Android phone running the autonomy stack to allow for flexible code implementation and to leverage existing visualization and networking tools, decreasing infrastructure requirements

  • Raw camera data is first processed independently using OpenCV on each camera's dedicated Raspberry Pi. The locations of the vision targets relative to the vehicle's body reference frame are sent to the Android phone

  • Perception data, along with raw orientation and depth data, is incorporated into a particle filter localization scheme (a minimal sketch of the filter's update step follows this list)

  • A map containing the known locations of the vision targets is used in a scan registration algorithm to estimate the location of the vehicle. This informs the cost function used by the particle filter's measurement update and enables more accurate particle propagation

  • Once the vehicle's location is estimated, a pure pursuit algorithm determines the desired position and orientation to move toward, based on a fixed, pre-computed optimal trajectory that takes the vehicle on a safe path through the course

  • The error between the current and desired vehicle state is fed to a closed-loop multivariable controller (LQR + integrator) that commands the thrusters to minimize tracking error (see the guidance-and-control sketch below)
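
A minimal sketch of the particle filter's measurement update and resampling, simplified to a planar (x, y, yaw) state with a single bearing-only target observation. The particle count, noise constant, and class/field names are assumptions, not the stack's actual values.

```java
import java.util.Random;

// Simplified particle filter update: depth is assumed to come directly from
// the depth sensor, so only the planar pose is estimated here.
public final class ParticleFilterSketch {
    static final int N = 500;                 // particle count (assumed)
    static final double BEARING_SIGMA = 0.1;  // rad, assumed measurement noise

    double[][] particles = new double[N][3];  // each row: x, y, yaw (from a prior)
    double[] weights = new double[N];
    final Random rng = new Random();

    /**
     * Re-weights particles given the observed bearing (rad) to one known
     * vision target at map position (tx, ty), then resamples.
     */
    void measurementUpdate(double observedBearing, double tx, double ty) {
        double sum = 0;
        for (int i = 0; i < N; i++) {
            double[] p = particles[i];
            // Bearing this particle would predict for the mapped target.
            double predicted = Math.atan2(ty - p[1], tx - p[0]) - p[2];
            // Wrapped angular error, scored with a Gaussian likelihood.
            double err = Math.atan2(Math.sin(observedBearing - predicted),
                                    Math.cos(observedBearing - predicted));
            weights[i] = Math.exp(-(err * err) / (2 * BEARING_SIGMA * BEARING_SIGMA));
            sum += weights[i];
        }
        for (int i = 0; i < N; i++) weights[i] /= sum;
        resample();
    }

    // Low-variance (systematic) resampling.
    void resample() {
        double[][] next = new double[N][3];
        double step = 1.0 / N, r = rng.nextDouble() * step, c = weights[0];
        int i = 0;
        for (int m = 0; m < N; m++) {
            double u = r + m * step;
            while (u > c) c += weights[++i];
            next[m] = particles[i].clone();
        }
        particles = next;
        java.util.Arrays.fill(weights, 1.0 / N);
    }
}
```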
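
And a minimal guidance-and-control sketch: pure pursuit selects a target point from the precomputed trajectory (the caller combines it with a desired orientation to form the reference state), and a fixed LQR gain plus a position-error integrator converts state error into the wrench handed to the allocator above. The gain, lookahead distance, loop period, and state layout are all assumptions.

```java
// Hypothetical guidance + control step for the autonomy stack's control loop.
public final class GuidanceAndControl {
    static final double LOOKAHEAD_M = 1.5;    // assumed lookahead distance
    static final double KI = 0.05, DT = 0.02; // assumed integral gain, loop period

    final double[][] trajectory;              // waypoints: rows of [x, y, z]
    final double[][] K;                       // LQR gain, solved offline
    final double[] integral = new double[3];  // integrator state on position error

    GuidanceAndControl(double[][] trajectory, double[][] K) {
        this.trajectory = trajectory;
        this.K = K;
    }

    /** Pure pursuit: first waypoint at least LOOKAHEAD_M ahead of the vehicle. */
    double[] lookaheadPoint(double[] pos) {
        for (double[] wp : trajectory) {
            double dx = wp[0] - pos[0], dy = wp[1] - pos[1], dz = wp[2] - pos[2];
            if (Math.sqrt(dx * dx + dy * dy + dz * dz) >= LOOKAHEAD_M) return wp;
        }
        return trajectory[trajectory.length - 1]; // end of course: track last point
    }

    /** LQR + integrator: u = -K * (x - x_ref) + Ki * integral(position error). */
    double[] controlStep(double[] state, double[] refState) {
        int nU = K.length, nX = K[0].length;
        double[] u = new double[nU];
        for (int i = 0; i < nU; i++)
            for (int j = 0; j < nX; j++)
                u[i] -= K[i][j] * (state[j] - refState[j]);
        // Integral action on the x, y, z position error (assumed state layout:
        // first three states and first three inputs are the translational axes).
        for (int axis = 0; axis < 3; axis++) {
            integral[axis] += (refState[axis] - state[axis]) * DT;
            u[axis] += KI * integral[axis];
        }
        return u; // desired body-frame wrench, handed to the thrust allocator
    }
}
```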

[Figures: system architecture diagram; autonomy stack high-level overview]

Related Project Links

I was the Engineering Lead of the University of Waterloo's Formula Electric team (FSAE). I led the engineering efforts for the 2018 vehicle, worked toward establishing a new team culture, and picked up any project that needed extra hands.

While learning more about modern control theory, I took on a more challenging project I had wanted to design for a while. Using Simulink, I built a simulation along with an LQR controller to balance a double pendulum, even when subjected to noise.


I tested robot localization algorithms in ROS (Robot Operating System), including particle-filter-based localization using a Kalman filter on state change, and LiDAR-based scan matching using ICP for prediction and the ICP fitness score for resampling.

