Statements (55)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | explanation method |
| gptkbp:appliesTo | any machine learning model |
| gptkbp:basedOn | Shapley values |
| gptkbp:compatibleWith | deep learning models, tree-based models |
| gptkbp:hasCollaborationsWith | true |
| https://www.w3.org/2000/01/rdf-schema#label | SHAP |
| gptkbp:influenced | game theory, cooperative game theory |
| gptkbp:is_a_platform_for | model interpretability |
| gptkbp:is_a_source_of | true |
| gptkbp:is_a_time_for | feature attribution |
| gptkbp:is_a_tool_for | data interpretation |
| gptkbp:is_available_in | dependence plots, feature contributions, force plots, summary plots |
| gptkbp:is_integrated_with | Jupyter notebooks |
| gptkbp:is_part_of | gptkb:explainable_AI, data analysis workflows, Lundberg's_research_on_model_interpretability |
| gptkbp:is_popular_among | data scientists, analysts, AI researchers, machine learning practitioners |
| gptkbp:is_recognized_for | Python, academic literature |
| gptkbp:is_used_in | healthcare, decision making, finance, marketing, research papers, neural networks, risk assessment, classification tasks, data science, predictive modeling, feature selection, AI ethics, automated machine learning, regression tasks, black box models, model comparison, ensemble models, algorithm transparency, interpreting machine learning models, model debugging, linear_models |
| gptkbp:isFacilitatedBy | missing values |
| gptkbp:performance | large datasets |
| gptkbp:produces | gptkb:Scott_M._Lundberg |
| gptkbp:provides | global feature importance, local feature importance |
| gptkbp:supports | multi-class classification, multi-label classification |
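The `gptkbp:basedOn | Shapley values` statement can be made concrete with a minimal, self-contained sketch of the cooperative-game computation that underlies SHAP's feature attributions. The `shapley_values` helper below is a hypothetical illustration, not the SHAP library's API: it evaluates the exact Shapley formula by enumerating all feature coalitions, which is exponential in the number of features; the actual `shap` package uses efficient model-specific approximations instead. The value function here (missing features replaced by a baseline) is one common choice of assumption.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x, relative to a baseline.

    The coalition value v(S) is f evaluated with features in S taken from x
    and all other features taken from the baseline (a simplifying assumption
    about how to represent "missing" features).
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        # Sum the weighted marginal contribution of feature i over all
        # coalitions S of the remaining features.
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = set(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += weight * (v(S | {i}) - v(S))
        phis.append(phi)
    return phis

# Hypothetical linear model: for linear models, each feature's Shapley value
# equals its own term's contribution relative to the baseline.
model = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(model, x=[1, 1], baseline=[0, 0]))  # → [2.0, 3.0]
```

The attributions satisfy the efficiency property from cooperative game theory: they sum to `f(x) - f(baseline)`, which is what makes them usable as both local feature importance (per prediction) and, aggregated, global feature importance.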