- Yesterday, the European Union Aviation Safety Agency (EASA) and Daedalean published a report on Concepts of Design Assurance for Neural Networks.
- The report is the result of 10 months of work between EASA and Daedalean.
- The project aimed to investigate the challenges and concerns of using Neural Networks (NN) in aviation.
- EASA stated that some of the results of the project will serve as a key enabler towards the certification and approval of machine learning in safety-critical applications onboard aircraft.
Köln/Zürich, March 2020. Yesterday, Daedalean and EASA published a public extract of the report, titled “Concepts of Design Assurance for Neural Networks (CoDANN)”.
The joint research project, carried out by a group of experts from EASA and Daedalean, is part of a consistent effort by European authorities to adapt to the evolving technological landscape. The European Commission released its Ethics Guidelines for Trustworthy AI at the beginning of 2019, creating a mandate for EASA to investigate how “AI-based” applications can be certified within the existing regulatory framework. To find answers, EASA set up its AI Task Force in 2019 and released its AI Roadmap in February 2020.
Challenges and Key Results of the Project
Artificial Intelligence (AI) provides major opportunities for the aviation industry, yet the trustworthiness of such systems needs to be guaranteed. While AI is a broad topic, the report investigates specifically the use of Machine Learning (ML) systems and Neural Networks (NN) in the context of the challenges outlined by the EASA AI Roadmap:
- Traditional Development Assurance frameworks are not adapted to machine learning;
- Difficulties in keeping a comprehensive description of the intended function;
- Lack of predictability and explainability of the ML application behavior;
- Lack of guarantee of robustness and of no ‘unintended functions’;
- Lack of standardized methods for evaluating the operational performance of ML/DL applications;
- Issue of bias and variance in ML applications;
- Complexity of architectures and algorithms;
- Adaptive learning processes.
The current aviation regulatory framework, and Development Assurance in particular, does not provide means of compliance for systems based on machine learning. The core of the report published yesterday consists of “Learning Assurance” guidelines (in contrast to traditional “Development Assurance”) that address the challenges and concerns of ML systems, applied to one of Daedalean’s core developments: visual landing guidance. These guidelines aim to provide the initial building blocks for the future certification of AI systems.
“Our investigation allowed us to take a decisive step in defining a Learning Assurance framework, which is one of the fundamental building blocks of the EASA AI Roadmap for the creation of an ‘AI trustworthiness framework’,” says Guillaume Soudain, who led the project at EASA.
The investigations are based on fundamental ML theory, with the adaptations required for use cases in safety-critical aviation. The report outlines realistic performance and safety assessments to define the failure tolerances, dataset sizes, and other parameters needed for the appropriate safety levels. The quantitative analyses show the feasibility of guaranteeing safety for neural networks at the appropriate levels of criticality.
“Our collaboration with EASA has created a solid foundation that has a realistic chance of paving the way for future use of ML in safety-critical applications in aviation and beyond,” says David Haber, Head of ML at Daedalean, who led the project from the company’s side. “We have considered non-trivial problems, yet more work is required to bring neural networks to full certification. Daedalean has significant expertise in building robust ML systems and showing that they are safe. We look forward to continuing our work with EASA.”
As EASA stated, their next step “will be to generalize, abstract, and complement these promising guidelines, in order to outline a first set of applicable guidance for safety-critical machine learning applications.” Daedalean and the Agency will continue their collaborative research.
“We were fortunate to draw upon the expertise inside the EASA AI Task Force,” says Luuk van Dijk, CEO and founder of Daedalean. “Our work with the group led by Guillaume Soudain was smooth and efficient. It firmly establishes EASA as leading the way among regulators in thinking about trustworthy AI.”
Daedalean AG, a start-up founded in 2016 and based in Zürich, works with eVTOL companies and aerospace manufacturers to specify, build, test and certify a fully autonomous autopilot system that can reliably and completely replace the human pilot. The company has developed systems demonstrating crucial early capabilities on a path to certification for airworthiness. As of December 2019, its team includes 30+ software engineers, as well as avionics specialists and pilots.
Contact for more information: Luuk van Dijk, CEO and founder, [email protected]
The mission of the European Union Aviation Safety Agency (https://www.easa.europa.eu/) is to promote the highest common standards of safety and environmental protection in civil aviation. The Agency develops common safety and environmental rules at the European level. It monitors the implementation of standards through inspections in the Member States and provides the necessary technical expertise, training and research. The Agency works hand in hand with the national authorities, which continue to carry out many operational tasks, such as certification of individual aircraft or licensing of pilots.