In 2016, I graduated with a degree in Computer Engineering from Michigan Technological University after years of robotics research, internships, and spending time in nature! Since then, I’ve been working as an engineer and researcher changing the way that robots see the world.
For the next couple of paragraphs, I’ll give an overview of my professional career through the lens of sensor fusion.
While in college, I designed and hand-built a drone-mounted data-collection system around a Hokuyo lidar like this one. Under the guidance of Dr. Timothy C. Havens, it was used to map bridges in the Metro-Detroit area like this one. Here’s a picture of the drone with the sensor kit I developed on it.
Continuing that work, I added a camera to the same data collection system in order to help our lab test a camera-lidar sensor fusion system for pose estimation. Here’s a picture of it, but it’s from 2014, so I look very different.
Following college, I was part of the first intern class at Uber ATC (later Uber ATG, and then bought by Aurora), where we built an end-to-end self-driving system based purely on cameras. After the internship, I transitioned to a full-time role to continue this work.
After some time at Uber, I joined Argo AI as one of the original employees. Getting back to my camera-lidar roots, I started there by working with our team to develop C++ onboard infrastructure for everything from our monocular object detector to low-level lidar firmware.
I furthered my mapping work at Argo on a team that patented a method for building ground-height maps with centimeter-level precision using a Gaussian Process, Poisson Surface Reconstruction, and several other hacks. My role in the project was to implement the Gaussian Process stage of the pipeline using the GPflow package.
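To give a feel for what the Gaussian Process stage does, here’s a minimal 1-D sketch of GP regression in plain NumPy (not Argo’s actual pipeline or GPflow’s API; the kernel parameters, noise level, and toy height samples are all made up for illustration):

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two 1-D point sets."""
    sq_dist = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

def gp_predict(x_train, z_train, x_query, noise=1e-2):
    """Posterior mean of GP regression at the query points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_query, x_train)
    return K_s @ np.linalg.solve(K, z_train)

# Toy "ground height" samples (meters) along a 1-D path.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
z = np.array([0.0, 0.1, 0.05, 0.2, 0.15])

# Smoothly interpolate the ground height between samples.
z_hat = gp_predict(x, z, np.array([1.5]))
```

The same idea extends to 2-D (x, y) inputs for a full ground-height surface; the appeal of a GP here is that it interpolates sparse lidar returns smoothly while remaining principled about uncertainty.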
I then spent a significant amount of time working with Argo’s fantastic lidar team to develop algorithms for Argo’s custom Geiger-mode lidar. Following my infra work, I was promoted to tech lead of a team of engineers to ship Argo’s first deep-learning lidar-based object detector. We successfully launched the detector in five different cities simultaneously.
Toward the end of my time at Argo, I was part of a small team that pushed the state of the art in stereo depth estimation by developing a novel approach to deep stereo vision on high-resolution images in real time. Our work was published at CVPR in 2019.
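For readers unfamiliar with the problem: stereo depth estimation finds, for each pixel in the left image, the horizontal shift (disparity) to its match in the right image; depth is then inversely proportional to disparity. Here’s a classical brute-force SSD block-matching baseline in NumPy, purely to illustrate what a stereo system computes (this is not our published method, and the patch size and disparity range are arbitrary toy values):

```python
import numpy as np

def disparity_ssd(left, right, patch=3, max_disp=8):
    """For each left-image pixel, search along the scanline in the right
    image for the shift whose patch minimizes sum-of-squared-differences."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.sum((ref - right[y - half:y + half + 1,
                                    x - d - half:x - d + half + 1]) ** 2)
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Deep stereo networks replace this hand-crafted matching cost with learned features and regularization, which is what makes them so much more robust on textureless or repetitive regions; doing that in real time on high-resolution images was the hard part.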
In addition to my algorithmic work on stereo, I also modified the auto-exposure logic for our cameras so that the stereo pair would expose at the same time, which was needed for high-quality stereo correspondences.
I currently lead the radar-trucking perception working group. The group is a cross-functional team of engineers across hardware, software, and systems who work through issues with our current radar while pushing on designs for the next generation of hardware. Our working group has successfully launched several deep-learning radar-based object detectors, field-of-view models, and sensor fusion models into production. We have also been able to generalize several of these models across newer radars as they come online.
Bringing together all of my previous work, I helped publish CramNet, a novel camera-radar early-fusion object detector, at ECCV ’22. Our work helped pave the way for long-range, principled, and efficient camera-radar fusion using dense imagery that is robust to various error modes.
Although I’ve had many other jobs and wonderful experiences throughout my career, I now consider myself a robotics engineer specializing in building machine learning algorithms for custom-built sensors. Ultimately, I just like doing cool stuff and am always happy to chat! If you want to learn more about me, please use one of the links below, or just check out my Resume for more information.