SHAP (SHapley Additive exPlanations): SHAP explains the output of any machine learning model using Shapley values from cooperative game theory. It assigns each feature an importance value (its Shapley value) for a particular prediction, treating the features as players that cooperate to produce the model's output.
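Concretely, for a model $f$ and feature set $F$, the attribution $\phi_i$ for feature $i$ is the classical Shapley value, averaging feature $i$'s marginal contribution over all subsets $S$ of the other features (here $f_S$ denotes the model's output when only the features in $S$ are present, e.g. with the rest marginalized out):

$$\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]$$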
Key points:
- Consistent: If a model changes so that a feature's marginal contribution increases or stays the same, that feature's attribution never decreases
- Global and local interpretability: Can explain both overall model behavior and individual predictions (see the sketch after this list)
- Theoretically sound: Shapley values are the unique attribution scheme satisfying the efficiency, symmetry, additivity, and null-player axioms, so the feature attributions always sum to the difference between the prediction and the baseline expected output
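The following is a minimal sketch of both local and global SHAP explanations using the `shap` package, assuming `shap` and scikit-learn are installed; the dataset and model choices are illustrative, not prescribed by the text.

```python
# Sketch: local and global SHAP explanations for a tree ensemble.
# Assumes `pip install shap scikit-learn`; dataset/model are illustrative.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

# Fit a tree ensemble on a standard regression dataset.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
explanation = explainer(X.iloc[:200])  # one attribution per (sample, feature)

# Local interpretability: per-feature contributions to a single prediction.
print(explanation[0].values)       # Shapley values for the first sample
print(explanation[0].base_values)  # baseline: the expected model output

# Global interpretability: aggregate attributions across many samples.
shap.plots.beeswarm(explanation)
```

The per-sample attributions plus the baseline reconstruct each prediction exactly (the efficiency axiom), while the beeswarm plot summarizes the same attributions across the dataset, which is how one object serves both local and global interpretability.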