Use Case: Building Explainable AI to Meet FDA Requirements

The Need for Explainability in Healthcare AI

The FDA expects AI and machine learning models used in healthcare and medicine to be transparent and explainable. Black-box models that lack interpretability face a much harder path to regulatory approval.

This is because doctors need to understand why an AI model arrived at a particular diagnosis or treatment recommendation. Patient health and safety depend on it.

A Multivariate Approach to Medical Diagnosis

One technique is to train deep learning models on multiple inputs at once, for example medical images together with related diagnostic test results. This better reflects how doctors reach a diagnosis by weighing several kinds of data.

The model can find connections between imaging features, lab tests, patient data, and conditions. This enables more holistic and accurate diagnosis.
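As a rough sketch of what such a model might look like, the PyTorch module below fuses a small CNN over a chest X-ray with a small MLP over numeric lab values before producing a diagnosis. The module name, layer sizes, input shapes, and the binary pneumonia output are illustrative assumptions, not a prescribed architecture.

import torch
import torch.nn as nn

class MultimodalDiagnosisNet(nn.Module):
    """Illustrative model that fuses an image branch with a lab-value branch."""

    def __init__(self, num_labs: int = 12, num_classes: int = 2):
        super().__init__()
        # Image branch: a small CNN over a single-channel chest X-ray.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # Lab branch: a small MLP over numeric lab results (e.g. WBC count).
        self.lab_branch = nn.Sequential(
            nn.Linear(num_labs, 32), nn.ReLU(),
        )
        # Fusion head: combine both feature vectors into class logits.
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, image: torch.Tensor, labs: torch.Tensor) -> torch.Tensor:
        img_feats = self.image_branch(image)   # (batch, 32)
        lab_feats = self.lab_branch(labs)      # (batch, 32)
        return self.head(torch.cat([img_feats, lab_feats], dim=1))

# Example: one 256x256 X-ray plus 12 lab values -> diagnosis logits.
model = MultimodalDiagnosisNet()
logits = model(torch.randn(1, 1, 256, 256), torch.randn(1, 12))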

Deriving Explainable Rules Through Inductive Logic Programming

Inductive logic programming (ILP) can treat the model's positive and negative diagnoses as training examples. From these examples, it automatically infers logical rules that characterize the criteria the model appears to use when making a diagnosis.

For instance, if the model diagnoses pneumonia when certain lung image features and lab results are present, inductive logic programming can induce rules like:

IF lung_opacity > 0.7 AND WBC > 11000 THEN pneumonia = TRUE
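A full ILP system (Aleph or Popper, for example) searches a much richer space of logical clauses, but the generate-and-test idea can be sketched in a few lines: enumerate candidate conjunctions of threshold conditions and keep those that cover the model's positive diagnoses while excluding the negatives. The cases, feature names, and thresholds below are made up purely to reproduce the example rule above.

from itertools import combinations

# Each case: a dict of features plus the deep model's diagnosis (True = pneumonia).
# The values here are invented for illustration only.
cases = [
    ({"lung_opacity": 0.82, "WBC": 13500}, True),
    ({"lung_opacity": 0.75, "WBC": 12100}, True),
    ({"lung_opacity": 0.30, "WBC": 12500}, False),
    ({"lung_opacity": 0.78, "WBC": 6800},  False),
    ({"lung_opacity": 0.20, "WBC": 7200},  False),
]

# Candidate literals of the form "feature > threshold".
candidate_literals = [("lung_opacity", 0.7), ("WBC", 11000)]

def covers(rule, features):
    """A rule (a conjunction of literals) covers a case if every literal holds."""
    return all(features[name] > threshold for name, threshold in rule)

# Generate-and-test: keep conjunctions that cover all positives and no negatives.
induced_rules = []
for size in (1, 2):
    for rule in combinations(candidate_literals, size):
        covers_all_pos = all(covers(rule, f) for f, label in cases if label)
        covers_no_neg = not any(covers(rule, f) for f, label in cases if not label)
        if covers_all_pos and covers_no_neg:
            induced_rules.append(rule)

for rule in induced_rules:
    body = " AND ".join(f"{name} > {threshold}" for name, threshold in rule)
    print(f"IF {body} THEN pneumonia = TRUE")

On these toy examples, only the two-condition conjunction separates the positives from the negatives, so the sketch prints exactly the rule shown above.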

These induced logical rules act as explanations for the model's diagnostic predictions. They make the model more interpretable by surfacing the key factors behind its decisions in an intuitive rule-based format.

So inductive logic programming provides a way to reverse engineer the patterns in the model's decision-making and extract explanatory rules from them. This makes the model more transparent and helps meet the FDA's expectations for explainable AI.

Hybrid AI for Accuracy and Explainability

Combining the strengths of deep neural networks and logic programming yields a hybrid AI system that pairs high predictive accuracy with interpretability.

The deep learning component can leverage complex medical data for more accurate diagnosis, while the logic component offers transparency into the model's predictions, supporting the explainability the FDA looks for. One way the two pieces might be wired together is sketched below.
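Still only a sketch, reusing the hypothetical covers function and induced_rules list from the earlier snippet, the hybrid system can pair each neural prediction with the rule that justifies it and route unexplained positives to a clinician.

def explain_prediction(model_says_pneumonia, features, rules):
    """Pair the network's diagnosis with an induced rule that justifies it, if any."""
    if not model_says_pneumonia:
        return "Model predicts no pneumonia."
    for rule in rules:
        if covers(rule, features):
            body = " AND ".join(f"{name} > {threshold}" for name, threshold in rule)
            return f"Model predicts pneumonia because {body}."
    # No induced rule covers this case: flag it for clinician review instead of
    # returning an unexplained positive prediction.
    return "Model predicts pneumonia, but no induced rule explains it; flag for review."

print(explain_prediction(True, {"lung_opacity": 0.82, "WBC": 13500}, induced_rules))

Routing uncovered positives to human review is one simple way to keep the explanation requirement from failing silently.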

We believe this interdisciplinary approach highlights how different AI techniques can work together to create trustworthy and explainable AI suitable for high-stakes healthcare applications.