
Explain Model Predictions

Understanding why a model makes predictions is just as important as the predictions themselves. MLOps Desktop provides several explainability tools to interpret your models.

Time to complete: ~10 minutes

Consider a loan approval model that predicts “approve” or “deny.” Without explainability, you can’t answer:

  • Why was this applicant denied?
  • Which features are most important?
  • Is the model biased toward certain groups?

Explainability tools answer these questions.

  1. Open your trained pipeline

    Load a pipeline with a Trainer and Evaluator.

  2. Run the pipeline

    Train your model by clicking Run.

  3. Click the Evaluator node

    After training completes, the Evaluator node shows results.

  4. Open the Explain tab

    Click the Explain tab in the results panel.

    You’ll see three visualizations:

    • Feature Importance
    • SHAP Summary
    • Partial Dependence

Feature Importance

What it shows: How much each feature contributes to predictions overall.

Feature Importance (Random Forest)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
petal length (cm)   ████████████████████  0.45
petal width (cm)    ██████████████        0.35
sepal length (cm)   ████                  0.12
sepal width (cm)    ███                   0.08

How to interpret:

  • Higher bars = more important features
  • For Random Forest, this is calculated by measuring how much each feature reduces impurity across all trees
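
For reference, the same scores can be reproduced outside MLOps Desktop. Here is a minimal sketch assuming a scikit-learn RandomForestClassifier trained on the Iris dataset; the variable names are illustrative, not part of the app:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Load Iris as a DataFrame so feature names are available
X, y = load_iris(return_X_y=True, as_frame=True)

# Train a Random Forest (hyperparameters are illustrative)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# feature_importances_ is the mean impurity reduction per feature across all trees
for name, score in sorted(zip(X.columns, model.feature_importances_),
                          key=lambda item: item[1], reverse=True):
    print(f"{name:<20} {score:.2f}")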

SHAP (SHapley Additive exPlanations) explains individual predictions by showing how each feature pushes the prediction up or down.

The SHAP Summary Plot shows feature impact across all predictions:

SHAP Summary Plot
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
← decreases prediction              increases prediction →
petal length    ●●●●●●●○○○○○○○○●●●●●●●●●●
petal width     ●●●●●○○○○○○○○●●●●●●●
sepal length    ●●●○○○○○●●●
sepal width     ●●○○○●●

Each dot is one sample:

  • Position (left/right): Feature’s impact on prediction
  • Color: Feature value (blue=low, red=high)

Reading the plot:

  • High petal length (red dots on right) → increases prediction
  • Low petal length (blue dots on left) → decreases prediction
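
A comparable summary plot can be generated directly with the shap library. A rough sketch assuming the model and X from the feature-importance sketch above (recent shap versions may return a single 3-D array instead of a per-class list, so the indexing may need adjusting):

import shap

# TreeExplainer supports tree ensembles such as Random Forest
explainer = shap.TreeExplainer(model)

# Older shap versions return one array per class for multiclass models;
# newer versions may return a (samples, features, classes) array instead
shap_values = explainer.shap_values(X)

# Beeswarm-style summary for class 2 (Virginica)
shap.summary_plot(shap_values[2], X)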

SHAP can also explain a single prediction step-by-step:

Prediction: Class 2 (Virginica)
Base value: 0.33 (the model's average prediction for this class, before any features are considered)
petal length = 5.1   +0.35  ▶▶▶▶▶▶▶
petal width  = 1.8   +0.25  ▶▶▶▶▶
sepal length = 6.3   +0.05  ▶
sepal width  = 2.5   -0.02  ◀
Final: 0.96 → Class 2

This shows exactly why a sample was classified as Virginica.
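
The per-feature contributions for a single row can be pulled out the same way. A sketch reusing the explainer above (row 100 is just one example Virginica sample):

# Rows 100-149 of the Iris dataset are Virginica
sample = X.iloc[[100]]
sample_values = explainer.shap_values(sample)

# Contributions toward class 2; adjust the indexing if your shap version
# returns a 3-D array rather than a per-class list
for name, contribution in zip(X.columns, sample_values[2][0]):
    print(f"{name:<20} {contribution:+.2f}")

print("Base value:", explainer.expected_value[2])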

Partial Dependence

What it shows: How changing one feature affects predictions, holding all other features constant.

The Partial Dependence plot shows the relationship between one feature and the prediction:

Partial Dependence: petal length
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Prediction
│        ╭───────────
│       ╱
│      ╱
│─────╯
└────────────────────→ petal length
  1     3     5     7

Interpretation: As petal length increases past ~2.5, the model predicts a different class.
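
scikit-learn can draw the same kind of curve. A sketch assuming the fitted model and DataFrame X from the earlier examples (target=2 selects the Virginica class):

import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# Partial dependence of the class-2 prediction on petal length,
# averaging out the other features
PartialDependenceDisplay.from_estimator(
    model, X, features=["petal length (cm)"], target=2)
plt.show()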

When reviewing feature importance and SHAP results, ask yourself:

  • Do the important features make domain sense?
  • Are any features surprisingly unimportant?
  • Does the decision boundary align with expectations?

When reviewing partial dependence plots, ask yourself:

  • Is the relationship linear or non-linear?
  • Are there threshold effects (sudden jumps)?
  • Do interactions make sense?

Watch out for:

Warning Sign                          Possible Issue
Random feature is #1 in importance    Data leakage or overfitting
ID column has high importance         Model memorizing, not learning
Unexpected feature interactions       Check for data quality issues

Click Export to save visualizations as images for reports:

  • feature_importance.png
  • shap_summary.png
  • partial_dependence.png
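
If you recreate the plots with the sketches above instead of using the Export button, matplotlib can save them under the same file names; a minimal example:

import matplotlib.pyplot as plt

# Save the current figure to disk (file name matches the Export output above)
plt.savefig("shap_summary.png", dpi=150, bbox_inches="tight")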

Troubleshooting:

  • “shap not found” — Run pip install shap
  • SHAP taking too long — SHAP can be slow on large datasets. Try sampling fewer rows (see the sketch after this list).
  • Plots not showing — Ensure matplotlib is installed: pip install matplotlib
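
For the sampling workaround, a sketch assuming a pandas DataFrame X and the explainer from the earlier examples:

# Explain a random subsample to speed up SHAP
# (the 200-row cap is an arbitrary example)
X_small = X.sample(n=min(200, len(X)), random_state=0)
shap_values = explainer.shap_values(X_small)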