On GitHub, Stephan Sturges has released the latest version of OpenLander, a free-to-use ground-level obstacle-detection segmentation AI for UAVs, which you can deploy today using cheap off-the-shelf sensors from Luxonis. He writes:

The default neural network now features a 3-class output, with detection of humans on a separate output layer! This allows finer-grained obstacle avoidance: if you have to fall out of the sky, you can now decide whether it’s best to drop your drone on top of a building or on someone’s head ;)
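As a rough sketch of how a 3-class pixelwise output like this could be consumed on the host side: take the winning class per pixel, then split out a "safe" mask and a "human" mask. The class indices and names below are illustrative assumptions, not OpenLander's actual tensor layout.

```python
import numpy as np

# Hypothetical class layout: 0 = safe ground, 1 = generic obstacle, 2 = human.
# OpenLander's real output may be shaped or ordered differently.
SAFE, OBSTACLE, HUMAN = 0, 1, 2

def decode_segmentation(scores: np.ndarray):
    """Turn a (classes, H, W) score tensor into per-pixel labels and masks."""
    labels = scores.argmax(axis=0)   # winning class per pixel
    safe_mask = labels == SAFE       # candidate landing pixels
    human_mask = labels == HUMAN     # pixels to avoid at all costs
    return labels, safe_mask, human_mask
```

With masks like these, a flight controller can rank emergency options: any region with human pixels is excluded first, then the largest contiguous safe region wins.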

How can I use this?

You will need any Luxonis device with an RGB camera, plus the correct version of the depthai-python library for your platform and device combination. For real-world use I recommend a device with a global-shutter RGB camera that has high light sensitivity and relatively low optical distortion.

Practical device recommendations:

If you do not yet own an OAK-series camera from Luxonis and want one to use with this repository, your best bet is to get an OAK-1 device modified with an OV9782 sensor with the “standard FOV”. This is how to do it:

  1. Go to the OAK-1 on the Luxonis store and add it to your cart https://shop.luxonis.com/collections/usb/products/oak-1
  2. Go to the “customization coupon” in the Luxonis store and add one of those https://shop.luxonis.com/collections/early-access/products/modification-cupon
  3. In your shopping cart, add “please replace RGB sensor with standard FOV OV9782” in the “instructions to seller” box

… and then wait a week or so for your global-shutter, fixed-focus, high-sensitivity sensor to arrive :)

Why?

In the amateur and professional UAV space there is a need for simple and cheap tools that can be used to determine safe emergency landing spots, avoiding crashes and potential harm to people.

How does it work?

The neural network performs pixelwise segmentation and is trained on synthetic data from my own generation pipeline. This public version is trained on about 500 GB of data. There is a new version trained on 4 TB of data that I may publish soon; if you want to test it, just contact me via email.
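Pixelwise segmentation models like this are conventionally scored per class with intersection-over-union between the predicted and ground-truth label maps. The metric sketch below is my own illustration of that standard measure, not the repository's evaluation code:

```python
import numpy as np

def class_iou(pred: np.ndarray, truth: np.ndarray, cls: int) -> float:
    """Intersection-over-union for one class between two label maps."""
    p, t = pred == cls, truth == cls
    union = np.logical_or(p, t).sum()
    if union == 0:
        return 1.0  # class absent from both maps counts as a perfect match
    return np.logical_and(p, t).sum() / union
```

Synthetic data makes this kind of evaluation cheap, since the ground-truth label map is exact by construction rather than hand-annotated.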

(Some examples of training images.)

Real world pics!

These were unfortunately all made with an old version of the neural network, but I don’t have my own drone to make more :-p The current-generation network performs at least 5x better on a mixed dataset and is a huge step up in real-world use.

(masked area is “landing safe”)

Full-fat version

FYI, there is a more advanced version of OpenLander that I am developing as a commercial product, which includes depth sensing, an IMU, more advanced neural networks, custom-developed sensors, and a whole lot more. If you’re interested in that, feel free to contact me via email (my name @ gmail).

Here’s a quick screengrab of deconflicting landing spots with depth sensing (this runs in parallel to the DNN system): depth_video.mov 
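One simple way depth can deconflict candidate landing spots (my own sketch, not the commercial implementation): among regions the segmentation marks safe, prefer the window whose depth values vary least, i.e. the flattest patch of ground.

```python
import numpy as np

def flattest_safe_patch(depth: np.ndarray, safe: np.ndarray, size: int = 2):
    """Return (row, col) of the size x size window that is entirely marked
    safe and has the lowest depth variance; None if no window qualifies."""
    best, best_var = None, np.inf
    h, w = depth.shape
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            if not safe[r:r + size, c:c + size].all():
                continue  # window touches an obstacle or human pixel
            v = depth[r:r + size, c:c + size].var()
            if v < best_var:
                best, best_var = (r, c), v
    return best
```

A real system would work on larger windows in metric units and run alongside the DNN, as the screengrab shows, but the ranking idea is the same.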

What about detection of X? Can you update the neural network?

There will be updates in the future, but I am also developing custom versions of the neural network for specific commercial use cases, and I won’t be adding everything to OpenLander. OpenLander will remain free to use and is meant to improve the safety of UAVs for all who enjoy flying them!

Sources:

Some code is taken from the excellent https://github.com/luxonis/depthai-experiments repository by Luxonis.

By Press