Dependable AI in Safe Autonomous Systems

Data-driven development methods show great promise in producing accurate models for perception functions such as object detection and semantic segmentation; however, most of them lack the holistic view needed for implementation in dependable systems. This project aims to produce robust Machine Learning (ML) models that meet, and stay ahead of, emerging certification requirements.

Start

2022-01-01

Planned completion

2025-12-31

Collaboration partners

Research group

Project manager at MDU

A large part of a trained model's accuracy and robustness derives from the data it was trained on, yet most research today focuses on model-architecture development. This project therefore emphasizes the dataset side of the problem, including novel methods of data augmentation such as neural augmentation. The expected outputs are the foundations of a safety-conscious ML system and a methodology for iterating on and refining such systems.
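As a minimal illustration of the dataset-side emphasis, the sketch below applies simple stochastic augmentations (horizontal flip, additive noise) to an image array. It is a hypothetical NumPy example, not the project's method; the neural-augmentation approaches mentioned above would learn such transformations from data rather than hard-code them.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple stochastic augmentations to a normalized image.

    Illustrative only: random horizontal flip plus Gaussian pixel noise,
    clipped back to the valid [0, 1] range.
    """
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    out = out + rng.normal(0.0, 0.05, size=out.shape)  # additive noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(seed=0)
img = rng.random((4, 4))        # stand-in for a normalized camera image
aug = augment(img, rng)         # augmented copy, same shape and range
```

Augmentations like these expand the effective training distribution, which is one lever for improving robustness without changing the model architecture.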

Project objectives

The main objective is to produce robust Machine Learning models that meet, and stay ahead of, emerging certification requirements.

This research relates to the following sustainable development goals