SHapley Additive exPlanations
GPTKB entity
Statements (25)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:explainable_AI_method |
| gptkbp:abbreviation | gptkb:SHAP |
| gptkbp:appliesTo | any machine learning model |
| gptkbp:availableOn | gptkb:software |
| gptkbp:basedOn | gptkb:Shapley_value |
| gptkbp:citation | Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. NeurIPS. |
| gptkbp:describes | feature importance |
| gptkbp:field | gptkb:artificial_intelligence, gptkb:machine_learning |
| gptkbp:introduced | gptkb:Scott_Lundberg |
| gptkbp:introducedIn | 2017 |
| gptkbp:openSource | yes |
| gptkbp:output | local explanations, global explanations |
| gptkbp:purpose | model interpretability |
| gptkbp:relatedTo | gptkb:LIME, gptkb:explainable_AI |
| gptkbp:repository | https://github.com/slundberg/shap |
| gptkbp:usedFor | feature attribution, model debugging, trust in AI models |
| gptkbp:uses | game theory |
| gptkbp:bfsParent | gptkb:SHAP |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | SHapley Additive exPlanations |
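
The `gptkbp:basedOn` and `gptkbp:uses` statements refer to the Shapley value from cooperative game theory. For reference, the attribution that the cited paper (Lundberg & Lee, 2017) assigns to a feature can be restated as below; the notation follows that paper, with F the set of all features and f_S the model evaluated on the feature subset S:

```latex
% Shapley value of feature i (Lundberg & Lee, 2017):
% F is the set of all features, f_S the model evaluated on feature subset S.
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
         \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
         \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]

% Local accuracy (the "Additive" in the name): the attributions plus the
% baseline \phi_0 = E[f(X)] sum to the model output for the explained instance.
f(x) = \phi_0 + \sum_{i=1}^{|F|} \phi_i
```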
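
A minimal usage sketch of the library at the repository listed above, illustrating the "local explanations" output and the `appliesTo: any machine learning model` statement; the synthetic dataset, the random-forest regressor, and the permutation-based explainer choice are illustrative assumptions, not part of this entry:

```python
# Minimal usage sketch for https://github.com/slundberg/shap.
# The synthetic dataset and the random-forest model are illustrative.
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X_arr, y = make_regression(n_samples=200, n_features=5, random_state=0)
X = pd.DataFrame(X_arr, columns=[f"f{i}" for i in range(5)])
model = RandomForestRegressor(random_state=0).fit(X, y)

# Model-agnostic explainer: wrapping the bare predict callable makes shap
# fall back to a permutation-based approximation of the Shapley values,
# so the same pattern works for any model exposing a prediction function.
explainer = shap.Explainer(model.predict, X)
shap_values = explainer(X.iloc[:50])

# Local explanation for one prediction: per-feature attributions that,
# added to the baseline E[f(X)], approximately recover the model output.
print(shap_values[0].values)       # per-feature SHAP values
print(shap_values[0].base_values)  # baseline expected model output
```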