Adversarial Machine Learning
GPTKB entity
Statements (62)
| Predicate | Object |
|---|---|
| gptkbp:instance_of | gptkb:machine_learning |
| gptkbp:addressed | Ensemble Methods, Adversarial Training, Robust Optimization, Input Preprocessing, Model Regularization |
| gptkbp:aims_to | Improve Model Robustness |
| gptkbp:applies_to | gptkb:neural_networks |
| gptkbp:can_lead_to | Model Overfitting, Security Breaches, Misclassification |
| gptkbp:challenges | Model Complexity, High Dimensionality, Data Scarcity |
| gptkbp:developed_by | gptkb:researchers, Academics, Industry Experts |
| gptkbp:has_impact_on | System Security, Model Performance, User Trust |
| https://www.w3.org/2000/01/rdf-schema#label | Adversarial Machine Learning |
| gptkbp:includes | Evasion Attacks, Inference Attacks, Poisoning Attacks |
| gptkbp:involves | Attack Strategies |
| gptkbp:is_applied_in | gptkb:Computer_Vision, gptkb:Natural_Language_Processing, gptkb:speeches |
| gptkbp:is_challenged_by | Ethical Concerns, Public Perception, Regulatory Issues |
| gptkbp:is_evaluated_by | Performance Metrics, Benchmark Datasets, Robustness Tests |
| gptkbp:is_explored_in | gptkb:Workshops, Conferences, Research Papers, Theses |
| gptkbp:is_influenced_by | gptkb:strategy, Optimization Techniques, Statistical Learning Theory |
| gptkbp:is_promoted_by | gptkb:Tutorials, gptkb:Workshops, Online Courses, Webinars, Meetups |
| gptkbp:is_related_to | gptkb:security, Machine Learning Security, Artificial Intelligence Ethics |
| gptkbp:is_studied_in | Defensive Techniques, Robustness Evaluation, Attack Detection |
| gptkbp:is_supported_by | Research Grants, Open Source Libraries, Industry Collaborations |
| gptkbp:related_to | Adversarial Examples |
| gptkbp:requires | Data Augmentation, Model Training, Evaluation Metrics |
| gptkbp:used_in | gptkb:security |
| gptkbp:bfsParent | gptkb:Irvine_Machine_Learning_Group |
| gptkbp:bfsLayer | 5 |
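The `gptkbp:includes` and `gptkbp:related_to` statements above name evasion attacks and adversarial examples. A minimal sketch of how one such example is crafted, using the Fast Gradient Sign Method on a hand-picked logistic-regression model (the weights, input, and `eps` below are illustrative assumptions, not from any real system):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Craft an adversarial example with FGSM.

    For binary cross-entropy loss on a logistic-regression model,
    the gradient of the loss w.r.t. the input x is (p - y) * w,
    where p = sigmoid(w . x + b). FGSM steps in the direction of
    the gradient's sign to increase the loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w              # dL/dx for BCE loss
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])              # clean input, true label 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
```

With these numbers, the model's confidence in the true label drops after the perturbation, which is the evasion effect the table's `gptkbp:can_lead_to` row (Misclassification) refers to.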