Pixhawk launches new website for open-source vision-based MAV autopilot

PIXHAWK UAV autopilot

PIXHAWK is both a student team and a mature, high-performance robotics toolkit, built from scratch to optimally support computer vision on micro air vehicles. For several semesters now, the project has been developing a computer-vision-focused robotics framework for micro air vehicles.

Project History and Goals

It all started off as a small one-person ETH excellence scholarship project of Lorenz Meier. During its first six months, however, it evolved into a full-scale student team project with 8–14 students per semester. The PIXHAWK team was officially assembled in January 2009. The project is sponsored by the Computer Vision and Geometry Lab, and the students are advised by Dr. Friedrich Fraundorfer and Prof. Marc Pollefeys. Additional travel-expense support is contributed by the Rektorat of ETH Zurich in Switzerland.

The prototypes and platform are developed to test computer vision approaches on micro air vehicles (MAVs), which are miniature unmanned air systems (UAS). As additional motivation, the student team participates each year in the IMAV competition. The team not only uses a number of open-source projects in its work, but also releases its hardware and software under an open-source license.

Computer Vision

The focus of this project is to use computer vision on micro air vehicles (MAVs) to enable autonomous action. The goal is to perform all image processing onboard, leading to full autonomy. The computer vision layer builds on base technology such as the mechanical platform and the attitude estimation of the IMU. At the same time, it provides data to more complex tasks such as obstacle avoidance and simultaneous localization and mapping (SLAM). To reach this goal, several computer vision problems have to be tackled:

  • Localization of the vehicle in its environment
  • Dynamic detection of obstacles in the current location
  • Iterative construction of a global map

The PIXHAWK Robotics Toolkit

Our aerial robotics toolkit offers many computer-vision-specific features, but also covers all the software, interfaces and structures needed to, for example, fly a fixed-wing aircraft with GPS. The optimizations for computer vision include:

  • A low-latency image buffer that allows an image to be processed by multiple processes
  • Hardware synchronization of image and IMU data for tight IMU-vision fusion
  • A position Kalman filter on pxIMU that can cope with dropped frames and varying image-processing times
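The dropped-frame tolerance in the last point can be sketched in a few lines. The following is an illustrative sketch, not the actual pxIMU firmware: a constant-velocity Kalman filter for one axis whose prediction step uses the true elapsed time between vision fixes, so a dropped frame or a slow image-processing cycle simply stretches the prediction interval rather than breaking the filter. The class name and noise parameters are hypothetical.

```python
import numpy as np

class PositionKalman1D:
    """Constant-velocity Kalman filter for one position axis (sketch).

    predict() takes the actual elapsed time dt, so irregular or
    missing vision updates are handled by predicting over a longer
    interval with correspondingly larger process noise.
    """

    def __init__(self, q=0.5, r=0.05):
        self.x = np.zeros(2)             # state: [position, velocity]
        self.P = np.eye(2)               # state covariance
        self.q = q                       # process-noise intensity
        self.R = np.array([[r]])         # measurement noise (position only)
        self.H = np.array([[1.0, 0.0]])  # we observe position directly

    def predict(self, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        # process noise grows with the elapsed interval dt
        Q = self.q * np.array([[dt**3 / 3, dt**2 / 2],
                               [dt**2 / 2, dt]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z_pos):
        y = np.array([z_pos]) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = PositionKalman1D()
t_prev = 0.0
# vision fixes arrive at irregular times; a frame was dropped near t=0.2
for t, z in [(0.05, 0.05), (0.10, 0.11), (0.30, 0.29), (0.35, 0.36)]:
    kf.predict(t - t_prev)  # predict over the actual elapsed interval
    kf.update(z)
    t_prev = t
# kf.x[0] is now close to the latest position fix, kf.x[1] estimates velocity
```

The key design point is that dt is a parameter of the prediction, not a fixed constant: missed frames only widen the covariance, and the next measurement pulls the estimate back in.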

Our system consists of three main building blocks: the AI/Vision codebase for computer vision, path planning and pattern recognition; the pxIMU firmware, which includes PID control, Kalman-filter estimation of attitude and position, and the inertial measurement sensors (3D accelerometer, 3D gyroscope, 3D magnetometer and pressure sensor); and QGroundControl, our open-source operator control unit. QGroundControl is cross-platform and runs on Windows, Linux and Mac OS.


Gary Mortimer

Founder and Editor of sUAS News | Gary Mortimer has been a commercial balloon pilot for 25 years and also flies full-size helicopters. Prior to that, he made tea and coffee in air traffic control towers across the UK as a member of the Royal Air Force.