Dr. Paul Costa presented a seminar, "Explainable AI Systems – a BN/MEBN Approach to Mission-Critical Decision Support Systems," on 24 Sep 2021.
Presented to the University of Tennessee, Knoxville.
The success of an AI-based decision support system is largely contingent on how much confidence the decision maker has in its recommendations. Especially in mission-critical systems (e.g., those involving risk to life), decision makers will only trust what the system is telling them to do if they can understand the process by which it arrived at its conclusions. A natural way of expressing this process is through visual explanations of the presented results, which in the vast majority of cases must be done without assuming the decision maker is conversant in the techniques and algorithms used by the system. Unfortunately, the underlying models in such systems are not easy to understand and can sometimes be counterintuitive for humans, making them hard to interpret and explore. Explainable AI systems aim to allow users to effectively understand, trust, and operate these models. This presentation will cover the Bayesian Network (BN), a recognizable model for qualitatively displaying probabilistic information, which is currently in development under a DARPA project. The technique provides a graphical visualization of quantitative beliefs about the conditional dependence and independence among random variables.

Also in this session, I will provide an overview of the research topics I am currently involved with, including the cybersecurity of advanced manufacturing and supply chain networks, as well as the evaluation of uncertainty in cybersecurity systems.
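To make the idea of a Bayesian Network concrete, the sketch below builds the classic textbook three-node example (Rain → GrassWet ← Sprinkler) in plain Python and queries it by enumeration. This is purely illustrative: the network structure, variable names, and probabilities are invented for the example and are not drawn from the DARPA project's models.

```python
# Illustrative only: a minimal three-node Bayesian Network
# (Rain -> GrassWet <- Sprinkler). All probabilities are made up.
from itertools import product

# Prior probabilities of the root nodes.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}

# Conditional probability table: P(GrassWet=True | Rain, Sprinkler).
P_wet = {
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.80,
    (False, False): 0.00,
}

def joint(rain, sprinkler, wet):
    """Joint probability, factorized along the network's edges."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

def posterior_rain_given_wet():
    """P(Rain=True | GrassWet=True), computed by summing out Sprinkler."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(f"P(Rain | GrassWet) = {posterior_rain_given_wet():.3f}")  # → 0.740
```

The graph structure makes the independence assumptions explicit (Rain and Sprinkler are independent a priori), which is exactly the kind of qualitative, visual information a decision maker can inspect without knowing the underlying inference algorithm.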