On-orbit Multi-Agent Simultaneous Estimation of Shape and Pose of Uncooperative Space Objects


Inspection and manipulation of uncooperative space objects using a team of spatially distributed spacecraft (chasers) is a crucial capability that enables a wide range of space applications, from debris inspection and repair of defunct satellites to 3D mapping and surface reconstruction of small celestial bodies. A prerequisite in all these applications is reliable relative pose estimation between the chasers and the target, together with a shape estimate of the latter. The ability to recognize previously observed target features (loop closure detection), under a wide range of viewing angles and illumination conditions, is crucial to reduce the drift caused by accumulated errors in the estimates of both target shape and relative pose.
Over the past decades, loop closure detection has become an important component of Earth-based robotic applications, such as vision-based simultaneous localization and mapping (SLAM). To tackle these issues in the orbital setting, I developed a visual intra- and inter-agent loop closure detection architecture that improves mapping and relative ego-motion estimation for a team of inspector spacecraft orbiting an uncooperative space object.
The method adopts and adapts a bag-of-words (BoW) approach to build, online and incrementally, a dictionary that incorporates distinctive target information as the target's shape is explored.
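For intuition, here is a minimal sketch of the incremental-dictionary idea, assuming binary ORB features and hand-picked thresholds; it is an illustrative toy, not the architecture developed in this work.

```python
# Toy sketch of an online, incremental bag-of-words (BoW) loop-closure check.
# Feature type, thresholds, and structure are illustrative assumptions.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)  # binary ORB descriptors (32 bytes each)

vocabulary = []       # visual words, grown online as new views arrive
HAMMING_THRESH = 40   # assumed quantization radius, in bits

def hamming(a, b):
    """Hamming distance between two uint8 descriptor rows."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def quantize(desc):
    """Map a descriptor to its nearest word, adding a new word if none is close."""
    for i, word in enumerate(vocabulary):
        if hamming(desc, word) < HAMMING_THRESH:
            return i
    vocabulary.append(desc.copy())
    return len(vocabulary) - 1

def bow_vector(image):
    """Extract ORB features and build a normalized visual-word histogram."""
    _, descriptors = orb.detectAndCompute(image, None)
    hist = {}
    if descriptors is not None:
        for d in descriptors:
            w = quantize(d)
            hist[w] = hist.get(w, 0) + 1
    norm = sum(v * v for v in hist.values()) ** 0.5 or 1.0
    return {w: v / norm for w, v in hist.items()}

def similarity(h1, h2):
    """Cosine similarity between two sparse, normalized BoW histograms."""
    return sum(v * h2.get(w, 0.0) for w, v in h1.items())

# Usage idea: store BoW vectors of past keyframes (from this agent or a
# teammate) and flag a candidate loop closure when similarity is high.
```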

Image Credit: NASA/JPL-Caltech


As a doctoral research fellow in the Aerial Mobility group at the NASA Jet Propulsion Laboratory, I developed localization and mapping algorithms for a Mars rover and helicopter team in a Mars analogue environment. Localization against large maps in GPS-denied, unstructured environments, safe and efficient navigation toward a target, and coordination in multi-agent systems are all key prerequisites for deploying autonomous mobile robots in real-world environments. Collaborative localization and mapping enables a variety of terrestrial and planetary applications, ranging from autonomous aerial monitoring of vast areas on Earth (e.g., coastal monitoring, wildfire management) to self-driving technology and autonomous ground and aerial robots on Mars.

Vision-Based Localization of Mars Rover on Mars Helicopter Aerial Maps
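As a rough, hypothetical sketch of the generic matching step behind such map-based localization (the actual rover/helicopter pipeline must additionally cope with the severe ground-to-air viewpoint and scale change), one could match query features against the aerial map and recover a map position via a robust homography fit; the function name and parameters below are illustrative assumptions.

```python
# Toy sketch: localize a query image inside a large aerial map image.
# Illustrative only; not the JPL pipeline.
import cv2
import numpy as np

def locate_in_map(query_img, map_img, min_matches=15):
    """Return (x, y) map-pixel coordinates of the query image center, or None."""
    orb = cv2.ORB_create(nfeatures=2000)
    kq, dq = orb.detectAndCompute(query_img, None)
    km, dm = orb.detectAndCompute(map_img, None)
    if dq is None or dm is None:
        return None

    # Brute-force Hamming matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(dq, dm, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < min_matches:
        return None

    # Robustly fit a homography from query pixels to map pixels.
    src = np.float32([kq[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([km[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Project the query image center into map coordinates.
    h, w = query_img.shape[:2]
    center = np.float32([[[w / 2.0, h / 2.0]]])
    x, y = cv2.perspectiveTransform(center, H)[0, 0]
    return float(x), float(y)
```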

DARPA Subterranean Challenge


The DARPA Subterranean, or “SubT,” Challenge is a robotics competition that seeks novel approaches to rapidly map, navigate, and search underground environments.

CoSTAR (a coalition of JPL, Caltech, MIT, and KAIST) is one of the DARPA-funded teams competing in the Systems track, which revolves around developing and fielding physical systems tasked with traversing, mapping, and searching different types of subterranean environments.

As a member of the CoSTAR Perception team, I work on collaborative localization and mapping for ground and aerial vehicles exploring unknown, complex, GPS-denied subterranean environments.


Earth, Moon and Mars Exploration


Self-driving Car Engineering

Autonomous vehicles promise safer, more reliable road transportation and fewer car accidents. In 2015, I joined the Autonomous Drive team at the Nissan Research Center (NRC) in Silicon Valley.

During my two years with NRC, my work focused mainly on vehicle perception systems. Specifically, I was responsible for the lidar system and for integrating it with a model predictive controller (MPC) to detect, track, and avoid stationary and moving obstacles.
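To give a flavor of the lidar side of such a pipeline (a hedged sketch with assumed parameters, not Nissan's system), one could cluster a point cloud into obstacle centroids that a tracker and an MPC can then consume:

```python
# Hypothetical sketch: cluster a lidar point cloud into obstacle centroids.
# eps, min_samples, and z_range are assumed tuning values.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_obstacles(points, eps=0.6, min_samples=8, z_range=(0.2, 2.5)):
    """Cluster an (N, 3) lidar point cloud into obstacle centroids.

    z_range crudely removes ground returns and overhead structure
    before clustering in the horizontal (x, y) plane.
    """
    mask = (points[:, 2] > z_range[0]) & (points[:, 2] < z_range[1])
    pts = points[mask]
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts[:, :2])
    centroids = []
    for label in set(labels) - {-1}:            # -1 marks DBSCAN noise
        cluster = pts[labels == label]
        centroids.append(cluster.mean(axis=0))  # (x, y, z) obstacle center
    return centroids

# Associating centroids frame-to-frame (e.g., nearest-neighbor matching plus
# a Kalman filter per obstacle) yields position/velocity estimates that an
# MPC can treat as moving constraints when planning an avoidance maneuver.
```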