Statements (35)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:field_of_artificial_intelligence |
| gptkbp:alsoKnownAs | gptkb:XAI |
| gptkbp:appliesTo | gptkb:machine_learning, deep learning |
| gptkbp:challenge | complexity of AI models, trade-off between accuracy and interpretability |
| gptkbp:emergedIn | 2010s |
| gptkbp:focusesOn | making AI decisions understandable to humans |
| gptkbp:goal | enable human oversight, improve trust in AI, increase transparency of AI systems |
| gptkbp:method | gptkb:LIME, gptkb:SHAP, feature importance, counterfactual explanations, rule-based explanations, saliency maps |
| gptkbp:motive | regulatory requirements, ethical concerns, user trust |
| gptkbp:relatedTo | ethics in AI, accountability in AI, fairness in AI, interpretable machine learning |
| gptkbp:standardizedBy | gptkb:DARPA_XAI_program, gptkb:IEEE_standards, gptkb:EU_AI_Act |
| gptkbp:usedIn | autonomous vehicles, finance, healthcare, legal systems, government decision-making |
| gptkbp:bfsParent | gptkb:Song_Chun |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Explainable AI |