gptkbp:instanceOf
|
explainable AI technique
|
gptkbp:application
|
tabular data
image data
text data
|
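A minimal tabular-data sketch of the technique in use, assuming scikit-learn and the lime package from the repository listed below; the dataset, model, and parameter choices are illustrative, not prescribed by this entry.

# Minimal sketch (assumes scikit-learn and `pip install lime`): explaining one
# prediction of a black-box classifier on tabular data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Any model exposing predict_proba will do; LIME treats it as a black box.
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the model on the perturbations, and
# fits a local interpretable (sparse linear) surrogate around it.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())  # [(feature condition, local weight), ...]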
gptkbp:approach
|
local explanations
|
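The "local explanations" approach can be sketched without the library: perturb the instance, query the black-box model on the perturbations, weight the samples by proximity, and fit a simple interpretable surrogate. The sketch below illustrates that idea under simplifying assumptions (Gaussian perturbations, an exponential kernel, a ridge surrogate); it is not the library's exact procedure.

# From-scratch sketch of a local surrogate explanation: sample around x,
# weight by proximity, fit a linear model whose coefficients serve as the
# local feature importances.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(predict_fn, x, num_samples=5000, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise (assumes standardized features).
    Z = x + rng.normal(size=(num_samples, x.shape[0]))
    # Query the black-box model on the perturbed samples.
    preds = predict_fn(Z)
    # Weight each sample by an exponential kernel on its distance to x (locality).
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # Fit a weighted linear surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

# Toy black box that depends only on the first feature: the explanation
# should assign most weight to coefficient 0.
black_box = lambda Z: 1.0 / (1.0 + np.exp(-3.0 * Z[:, 0]))
print(local_explanation(black_box, np.zeros(4)))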
gptkbp:citation
|
over 8000
|
gptkbp:describedBy
|
gptkb:Why_Should_I_Trust_You?_Explaining_the_Predictions_of_Any_Classifier
|
gptkbp:field
|
gptkb:artificial_intelligence
gptkb:machine_learning
|
gptkbp:fullName
|
gptkb:Local_Interpretable_Model-agnostic_Explanations
|
https://www.w3.org/2000/01/rdf-schema#label
|
LIME
|
gptkbp:influenced
|
gptkb:SHAP
gptkb:Anchors
|
gptkbp:input
|
black-box model
|
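The only input LIME needs from the black-box model is a prediction function over batches (class probabilities for classification); the model's internals are never inspected. A hedged illustration of that interface, where the function below is a stand-in rather than a real model:

# The black-box "input" is just a callable: batch of rows in, class
# probabilities out. Anything with this shape can be explained, regardless
# of how the model works internally.
import numpy as np

def black_box_predict(batch):
    # Stand-in for an opaque model: P(class 1) from an arbitrary rule.
    p1 = 1.0 / (1.0 + np.exp(-(batch[:, 0] - batch[:, 1])))
    return np.column_stack([1.0 - p1, p1])

# This callable is what gets passed as the prediction function to the
# explainer's explain_instance call; LIME only ever calls it on perturbed data.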
gptkbp:introducedIn
|
2016
|
gptkbp:inventedBy
|
gptkb:Marco_Tulio_Ribeiro
gptkb:Carlos_Guestrin
gptkb:Sameer_Singh
|
gptkbp:language
|
gptkb:Python
|
gptkbp:license
|
gptkb:MIT_License
|
gptkbp:limitation
|
instability
approximate explanations
sensitivity to sampling
|
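The instability and sampling-sensitivity limitations can be observed directly: because explanations come from a random perturbation sample, two runs with different seeds on the same instance may weight or rank features differently. A short self-contained sketch (seeds and dataset are illustrative):

# Sketch of the instability limitation: explanations depend on the random
# perturbation sample, so different seeds can yield different feature weights.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

for seed in (0, 1):
    explainer = LimeTabularExplainer(X, mode="classification", random_state=seed)
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
    print(seed, exp.as_list())  # weights/rankings may differ between seeds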
gptkbp:method
|
model-agnostic
post-hoc explanation
|
gptkbp:openSource
|
true
|
gptkbp:output
|
feature importance
interpretable model
|
gptkbp:purpose
|
model interpretability
|
gptkbp:relatedTo
|
gptkb:SHAP
gptkb:Anchors
counterfactual explanations
|
gptkbp:repository
|
https://github.com/marcotcr/lime
|
gptkbp:usedFor
|
feature selection
building trust in AI
debugging models
explaining predictions
|
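For the text-data application and the "explaining predictions" / "debugging models" uses, a second sketch with LimeTextExplainer; the pipeline and toy corpus below are illustrative assumptions.

# Sketch: explaining a text classifier's prediction, e.g. to debug which
# words drive the decision.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

texts = ["great movie, loved it", "terrible plot, awful acting",
         "wonderful and moving", "boring and awful"]
labels = [1, 0, 1, 0]

# The whole pipeline is the black box: raw text in, class probabilities out.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("loved the acting but the plot was awful",
                                 pipeline.predict_proba, num_features=5)
print(exp.as_list())  # per-word contributions toward the "positive" class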