Explain Model Predictions
Understanding why a model makes predictions is just as important as the predictions themselves. MLOps Desktop provides several explainability tools to interpret your models.
Time to complete: ~10 minutes
Prerequisites
- Completed the Train a Classifier tutorial
- Python packages:
pip install shap matplotlib
Why Explainability Matters
Consider a loan approval model that predicts “approve” or “deny.” Without explainability, you can’t answer:
- Why was this applicant denied?
- Which features are most important?
- Is the model biased toward certain groups?
Explainability tools answer these questions.
Enable Explainability
1. Open your trained pipeline. Load a pipeline with a Trainer and Evaluator.
2. Run the pipeline. Train your model by clicking Run.
3. Click the Evaluator node. After training completes, the Evaluator node shows results.
4. Open the Explain tab. Click the Explain tab in the results panel.
You’ll see three visualizations:
- Feature Importance
- SHAP Summary
- Partial Dependence
Feature Importance
What it shows: How much each feature contributes to predictions overall.
```
Feature Importance (Random Forest)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
petal length (cm)   ████████████████████  0.45
petal width (cm)    ██████████████        0.35
sepal length (cm)   ████                  0.12
sepal width (cm)    ███                   0.08
```

How to interpret:
- Higher bars = more important features
- For Random Forest, this is calculated by measuring how much each feature reduces impurity across all trees
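Outside the app, these impurity-based importances come straight from the fitted estimator. Here is a minimal sketch, assuming a scikit-learn random forest trained on the Iris data shown in the chart above (the setup is illustrative, not MLOps Desktop's internals):

```python
# Sketch: impurity-based feature importances from a random forest on Iris.
# Mirrors the bar chart above; assumes scikit-learn, not the app's internals.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# feature_importances_ aggregates each feature's impurity reduction across all trees
for name, score in sorted(zip(X.columns, model.feature_importances_),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name:18s} {score:.2f}")
```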
SHAP Values
SHAP (SHapley Additive exPlanations) explains individual predictions by showing how each feature pushes the prediction up or down.
Summary Plot
Shows feature impact across all predictions:
```
SHAP Summary Plot
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 ← decreases prediction        increases prediction →
petal length  ●●●●●●●○○○○○○○○●●●●●●●●●●
petal width   ●●●●●○○○○○○○○●●●●●●●
sepal length  ●●●○○○○○●●●
sepal width   ●●○○○●●
```

Each dot is one sample:
- Position (left/right): Feature’s impact on prediction
- Color: Feature value (blue=low, red=high)
Reading the plot:
- High petal length (red dots on right) → increases prediction
- Low petal length (blue dots on left) → decreases prediction
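The same kind of plot can be produced directly with the shap package. A minimal sketch, continuing the Iris random forest from the previous example (exact API details vary between shap versions):

```python
# Sketch: SHAP beeswarm/summary plot for the Iris random forest from the
# previous example; shap's API details differ slightly across versions.
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

explainer = shap.Explainer(model, X)   # dispatches to a tree explainer here
sv = explainer(X)                      # Explanation: (samples, features, classes)

# Beeswarm for class 2 (Virginica): dot position = impact, color = feature value
shap.plots.beeswarm(sv[:, :, 2])
```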
Waterfall Plot
Explains a single prediction step-by-step:
```
Prediction: Class 2 (Virginica)
Base value: 0.33 (average across all classes)

petal length = 5.1   +0.35  ▶▶▶▶▶▶▶
petal width  = 1.8   +0.25  ▶▶▶▶▶
sepal length = 6.3   +0.05  ▶
sepal width  = 2.5   -0.02  ◀

Final: 0.96 → Class 2
```

This shows exactly why a sample was classified as Virginica.
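In shap this corresponds to a waterfall plot for one row. A short sketch, continuing the explainer from the summary-plot example (the row index is arbitrary):

```python
# Sketch: explain a single row's class-2 score with a waterfall plot.
# Continues `explainer`, `sv`, `model`, and `X` from the summary-plot sketch.
sample = 0                                    # arbitrary row to explain

# The waterfall walks from the base value (average model output) to this
# sample's final class-2 score, one feature contribution at a time.
shap.plots.waterfall(sv[sample, :, 2])
print(model.predict_proba(X.iloc[[sample]]))  # compare with the final value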
Partial Dependence Plots
What it shows: How changing one feature affects predictions, holding all other features constant.
Shows the relationship between one feature and the prediction:

```
Partial Dependence: petal length
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Prediction │        ╭─────────
           │       ╱
           │      ╱
           │─────╯
           └────────────────────→ petal length
              1     3     5     7
```

Interpretation: As petal length increases past ~2.5, the model predicts a different class.
Shows the interaction between two features:

```
Partial Dependence: petal length × petal width
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
petal width │ ░░░░░████████
            │ ░░░░░████████
            │ ░░░░░████████
            │ ░░░░░░░░░████
            └──────────────→ petal length

  ░ = Class 0    █ = Class 2
```

This shows the decision boundary between classes.
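scikit-learn can generate both kinds of plot directly. A minimal sketch on the same Iris setup (the chosen features and target class are just for illustration):

```python
# Sketch: single-feature and two-feature partial dependence plots with
# scikit-learn on Iris; feature names and target class are illustrative.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

PartialDependenceDisplay.from_estimator(
    model, X,
    features=["petal length (cm)",                         # one-feature curve
              ("petal length (cm)", "petal width (cm)")],  # two-feature interaction
    target=2,                                              # class index 2 (Virginica)
)
plt.show()
```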
Interpreting Explanations
For Classification
Ask yourself:
- Do the important features make domain sense?
- Are any features surprisingly unimportant?
- Does the decision boundary align with expectations?
For Regression
Ask yourself:
- Is the relationship linear or non-linear?
- Are there threshold effects (sudden jumps)?
- Do interactions make sense?
Red Flags
Watch out for:
| Warning Sign | Possible Issue |
|---|---|
| Random feature is #1 important | Data leakage or overfitting |
| ID column has high importance | Model memorizing, not learning |
| Unexpected feature interactions | Check for data quality issues |
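A quick way to catch the first two red flags is to add a purely random column and retrain: a healthy model should rank it near the bottom. Here is a minimal sketch on the Iris setup used above (the `random_noise` column is hypothetical):

```python
# Sanity-check sketch: a purely random (hypothetical) column should rank near
# the bottom. If it, or an ID-like column, ranks highly, suspect leakage or
# memorization rather than real learning.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True, as_frame=True)
X["random_noise"] = np.random.default_rng(0).normal(size=len(X))

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
for name, score in sorted(zip(X.columns, model.feature_importances_),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name:18s} {score:.3f}")
```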
Export Explanations
Click Export to save visualizations as images for reports:
- feature_importance.png
- shap_summary.png
- partial_dependence.png
Next Steps
Troubleshooting:
- “shap not found”: run `pip install shap`
- SHAP taking too long: SHAP can be slow on large datasets. Try sampling fewer rows, as in the sketch below.
- Plots not showing: ensure `matplotlib` is installed: `pip install matplotlib`
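A small sketch of the sampling workaround, continuing the explainer from the SHAP examples above:

```python
# Sketch: cut SHAP runtime by explaining a sample of rows instead of the full
# dataset; continues `explainer` and `X` from the SHAP examples above.
X_small = X.sample(n=100, random_state=0)   # explain a subset of rows
sv_small = explainer(X_small)
shap.plots.beeswarm(sv_small[:, :, 2])      # summary plot on the sample only
```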