SHapley Additive exPlanations
GPTKB entity
Statements (26)
Predicate | Object |
---|---|
gptkbp:instanceOf | explainable AI method |
gptkbp:abbreviation | gptkb:SHAP |
gptkbp:appliesTo | any machine learning model |
gptkbp:availableOn | gptkb:software |
gptkbp:basedOn | gptkb:Shapley_value |
gptkbp:citation | Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. NeurIPS. |
gptkbp:describes | feature importance |
gptkbp:field | gptkb:artificial_intelligence, gptkb:machine_learning |
https://www.w3.org/2000/01/rdf-schema#label | SHapley Additive exPlanations |
gptkbp:introduced | gptkb:Scott_Lundberg |
gptkbp:introducedIn | 2017 |
gptkbp:openSource | yes |
gptkbp:output | local explanations, global explanations |
gptkbp:purpose | model interpretability |
gptkbp:relatedTo | gptkb:LIME, gptkb:explainable_AI |
gptkbp:repository | https://github.com/slundberg/shap |
gptkbp:usedFor | feature attribution, model debugging, trust in AI models |
gptkbp:uses | game theory |
gptkbp:bfsParent | gptkb:SHAP, gptkb:SHAP_values |
gptkbp:bfsLayer | 7 |
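The `basedOn` and `uses` statements above connect SHAP to the Shapley value from cooperative game theory: each feature's attribution is its average marginal contribution to the model's prediction across all feature subsets. A minimal sketch of that exact computation, using a hypothetical toy value function `v` (not the SHAP library's API) standing in for "model output when only features in subset S are known":

```python
from itertools import combinations
from math import factorial

def shapley_values(n, v):
    """Exact Shapley values for an n-player game.

    v maps a frozenset of player indices to a real-valued payoff
    (here: the model's prediction given that feature subset).
    Exponential in n, so this is only illustrative.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Weighted marginal contribution of feature i
                phi[i] += w * (v(S | {i}) - v(S))
    return phi

def v(S):
    # Hypothetical toy "model": linear effects plus an interaction term
    val = 0.0
    if 0 in S:
        val += 2.0
    if 1 in S:
        val += 3.0
    if 0 in S and 1 in S:
        val += 1.0  # interaction, split equally by the Shapley axioms
    return val

print(shapley_values(2, v))  # [2.5, 3.5] — attributions sum to v({0, 1}) = 6
```

The "Additive" in the name refers to this property: the attributions sum exactly to the difference between the prediction and the baseline, which is what makes the resulting local explanations consistent.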