Adversarial examples in neural networks
                        
GPTKB entity

Statements (49)
| Predicate | Object | 
|---|---|
| gptkbp:instanceOf | gptkb:research |
| gptkbp:application | security testing, robustness evaluation |
| gptkbp:challenge | certifying robustness, defending against adversarial attacks, detecting adversarial examples |
| gptkbp:concerns | AI safety, model vulnerability, trustworthiness of AI systems |
| gptkbp:defenseMechanism | adversarial training (see the training-loop sketch below), robust optimization, input preprocessing |
| gptkbp:defines | Inputs to machine learning models that are intentionally designed to cause the model to make a mistake (formalized below). |
| gptkbp:field | gptkb:artificial_intelligence, gptkb:machine_learning |
| gptkbp:firstDescribed | 2013 (Szegedy et al.) |
| gptkbp:impact | can cause misclassification, can reduce model accuracy |
| gptkbp:method | adding small perturbations to input data (see the FGSM sketch below) |
| gptkbp:notableBattle | Carlini & Wagner attack, DeepFool, Fast Gradient Sign Method (FGSM), Jacobian-based Saliency Map Attack (JSMA), Projected Gradient Descent (PGD) |
| gptkbp:notableContributor | gptkb:Ian_Goodfellow, gptkb:Christian_Szegedy, gptkb:Dawn_Song, gptkb:Alexey_Kurakin, Nicholas Carlini, Nicolas Papernot |
| gptkbp:notablePublication | Intriguing properties of neural networks (Szegedy et al., 2013), Explaining and Harnessing Adversarial Examples (Goodfellow et al., 2014), Adversarial Machine Learning at Scale (Kurakin et al., 2016), Towards Evaluating the Robustness of Neural Networks (Carlini & Wagner, 2017) |
| gptkbp:relatedConcept | robustness, explainability, transferability, black-box attack, gradient masking, white-box attack |
| gptkbp:relatedTo | deep learning, neural networks |
| gptkbp:trainer | gptkb:CIFAR-10, gptkb:ImageNet, gptkb:MNIST |
| gptkbp:bfsParent | gptkb:Christian_Szegedy |
| gptkbp:bfsLayer | 8 |
| https://www.w3.org/2000/01/rdf-schema#label | Adversarial examples in neural networks |
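
The definition in the gptkbp:defines row is commonly formalized as a norm-bounded perturbation. The notation below is a standard convention in the adversarial-examples literature, not taken from the GPTKB entry itself; the norm p and budget ε vary by threat model.

```latex
% An adversarial example x' for a classifier f, at an input x that
% f correctly labels as y, under perturbation budget \varepsilon:
\exists\, x' :\quad \|x' - x\|_p \le \varepsilon
\quad\text{and}\quad f(x') \ne y
```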
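The gptkbp:method and gptkbp:notableBattle rows describe crafting such perturbations from the model's gradient. Below is a minimal PyTorch sketch of the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2014), one of the attacks listed above. The toy model, input shape, and epsilon value are illustrative assumptions, not values from the GPTKB entry.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """FGSM: one gradient-sign step, x' = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid input range (assumed here to be [0, 1]).
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy stand-in classifier; any differentiable model works.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)        # batch of fake "images"
    y = torch.randint(0, 10, (4,))      # fake labels
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())      # perturbation stays <= epsilon
```

Iterating this step with a projection back onto the epsilon-ball yields Projected Gradient Descent (PGD), also listed in the gptkbp:notableBattle row.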
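The gptkbp:defenseMechanism row lists adversarial training as a defense. A minimal training-loop sketch follows, reusing the fgsm_attack function from the sketch above; the equal clean/adversarial loss weighting, optimizer, and random toy data are illustrative assumptions rather than a prescribed recipe.

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One adversarial-training step: fit the clean batch and an
    FGSM-perturbed copy of it (equal weighting is an arbitrary choice)."""
    x_adv = fgsm_attack(model, x, y)    # attack from the sketch above
    optimizer.zero_grad()               # clear grads left by the attack
    loss = (nn.functional.cross_entropy(model(x), y)
            + nn.functional.cross_entropy(model(x_adv), y)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(5):                   # toy loop on random data
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    print(step, adversarial_training_step(model, optimizer, x, y))
```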