TRUSTY - Trustworthy intelligent system for remote digital towers
The goal of the TRUSTY project is to provide adaptation in the level of transparency to enhance the trustworthiness of AI-powered decisions in the context of remote digital towers (RDTs).
In a conventional control tower, operators have direct visual access to the taxiways and runways they monitor; the RDT concept instead provides this information through video transmission, augmented with warnings and corresponding explanations. To deliver trustworthiness in an AI-powered intelligent system, TRUSTY will pursue the following approaches:
- ‘Self-explainable and self-learning’ system for critical decision-making
- ‘Transparent ML’ models incorporating interpretability, fairness, and accountability
- ‘Interactive data visualization and multimodal human-machine interface/interactions (HMI)’, i.e., a graphical user interface (GUI), for smart and efficient decision support
- ‘Adaptive level of explanation’ tailored to the user's cognitive state
- ‘Human-centered AI (HCAI)’ to enhance the trustworthiness of AI-powered systems
- ‘Human-machine collaboration (HMC) or human-AI teaming (HAIT)’ to incorporate user feedback, ensuring computational flexibility and user acceptability
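As a purely illustrative sketch of the ‘adaptive level of explanation’ idea above (all function names, thresholds, and the workload scale are assumptions, not TRUSTY's design), the detail of a warning's explanation could be selected from an estimated operator workload:

```python
# Hypothetical sketch: scale explanation detail to estimated cognitive
# workload. Names, thresholds, and the [0, 1] workload scale are
# illustrative assumptions only.

def explanation_level(workload: float) -> str:
    """Map an estimated cognitive workload in [0, 1] to an explanation level."""
    if not 0.0 <= workload <= 1.0:
        raise ValueError("workload must be in [0, 1]")
    if workload > 0.7:      # high load: keep the message minimal
        return "terse"
    if workload > 0.3:      # moderate load: give a short rationale
        return "summary"
    return "detailed"       # low load: full reasoning can be shown


def explain_warning(warning: str, rationale: list[str], workload: float) -> str:
    """Render a warning with more or less of its rationale attached."""
    level = explanation_level(workload)
    if level == "terse":
        return warning
    if level == "summary":
        return f"{warning} ({rationale[0]})"
    return warning + "\n  - " + "\n  - ".join(rationale)


print(explain_warning("Runway incursion risk",
                      ["vehicle detected on taxiway B",
                       "aircraft on short final"],
                      workload=0.8))
```

In a real system the workload estimate would come from physiological or behavioral sensing rather than a hand-set number, and the levels would be validated with operators.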
TRUSTY will rely on state-of-the-art developments in interactive data visualization and user-centric explanation, and on recent technological improvements in accuracy, robustness, interpretability, fairness, and accountability. We will apply information visualization techniques such as visual analytics, data-driven storytelling, and immersive analytics in human-machine interactions (HMI). This project thus sits at the crossroads of trustworthy AI, multimodal machine learning, active learning, and UX for human-AI model interaction.