Batch Normalization

GPTKB entity

Statements (51)
Predicate Object
gptkbp:instance_of gptkb:technique
gptkbp:applies_to deep learning
gptkbp:can_be_applied_during training phase
inference phase
gptkbp:can_be_combined_with residual networks
gptkbp:can_be_used_with dropout
gptkbp:can_lead_to better generalization
gptkbp:can_reduce overfitting
gptkbp:developed_by gptkb:Christian_Szegedy
gptkb:Sergey_Ioffe
gptkbp:enhances gradient flow
https://www.w3.org/2000/01/rdf-schema#label Batch Normalization
gptkbp:improves training speed
gptkbp:includes scaling factor
bias term
gptkbp:introduced_in gptkb:2015
gptkbp:is_a_key_component_of modern architectures
gptkbp:is_applied_in fully connected layers
recurrent neural networks
convolutional layers
sequence models
gptkbp:is_beneficial_for very deep networks
gptkbp:is_implemented_in gptkb:TensorFlow
gptkb:Keras
gptkb:PyTorch
gptkbp:is_often_accompanied_by gptkb:neural_networks
gptkbp:is_often_used_in gptkb:generative_adversarial_networks
convolutional neural networks
gptkbp:is_part_of gptkb:neural_networks
gptkbp:is_related_to layer normalization
group normalization
instance normalization
gptkbp:is_used_in natural language processing
image classification
object detection
transfer learning
gptkbp:is_used_to_mitigate vanishing gradients
gptkbp:normalizes layer inputs
gptkbp:reduces internal covariate shift
gptkbp:requires computational overhead
gptkbp:sensitivity batch size
gptkbp:technique accelerates convergence
improves model stability
normalizes the output of a previous layer
normalizes activations
reduces sensitivity to initialization
gptkbp:uses mini-batch statistics
gptkbp:bfsParent gptkb:neural_networks
gptkb:Deep_Learning
gptkb:Sergey_Ioffe
gptkbp:bfsLayer 4
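
Taken together, the statements "normalizes layer inputs", "uses mini-batch statistics", and "includes scaling factor / bias term" describe the standard transform from the 2015 paper by Sergey Ioffe and Christian Szegedy. For a mini-batch B = {x_1, ..., x_m}, in LaTeX notation:

\mu_{\mathcal{B}} = \frac{1}{m} \sum_{i=1}^{m} x_i
\qquad
\sigma_{\mathcal{B}}^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_{\mathcal{B}})^2

\hat{x}_i = \frac{x_i - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}}
\qquad
y_i = \gamma \hat{x}_i + \beta

where \gamma is the learnable scaling factor, \beta the learnable bias term, and \epsilon a small constant added for numerical stability.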
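
A minimal sketch of the technique in NumPy follows, covering the two phases listed under can_be_applied_during: the training phase normalizes with mini-batch statistics, while the inference phase reuses accumulated running estimates. The class name BatchNormSketch and the eps and momentum defaults are illustrative assumptions, not taken from any particular library.

import numpy as np

class BatchNormSketch:
    """Batch normalization over axis 0 of a (batch, features) array."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        self.gamma = np.ones(num_features)        # learnable scaling factor
        self.beta = np.zeros(num_features)        # learnable bias term
        self.running_mean = np.zeros(num_features)
        self.running_var = np.ones(num_features)
        self.eps = eps                            # numerical-stability constant
        self.momentum = momentum                  # running-average update rate

    def __call__(self, x, training=True):
        if training:
            # Training phase: normalize with the mini-batch statistics.
            mean = x.mean(axis=0)
            var = x.var(axis=0)
            # Accumulate running estimates for later use at inference.
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            # Inference phase: reuse the accumulated running statistics.
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean) / np.sqrt(var + self.eps)   # normalize layer inputs
        return self.gamma * x_hat + self.beta          # scale and shift

bn = BatchNormSketch(8)
batch = np.random.randn(32, 8) * 3.0 + 5.0   # toy activations, shifted and scaled
out = bn(batch, training=True)
print(out.mean(axis=0).round(6))             # per-feature means are ~0
print(out.std(axis=0).round(6))              # per-feature stds are ~1 (gamma=1, beta=0)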
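
Since gptkb:PyTorch appears under is_implemented_in, a short usage sketch with torch.nn.BatchNorm2d is given below; the layer sizes and input shape are arbitrary illustrations. Switching between model.train() and model.eval() selects mini-batch versus running statistics, matching the training-phase/inference-phase distinction above.

import torch
import torch.nn as nn

# A convolution followed by batch normalization and a nonlinearity,
# a common ordering in convolutional networks.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),   # one (gamma, beta) pair per channel
    nn.ReLU(),
)

x = torch.randn(8, 3, 32, 32)   # mini-batch of 8 three-channel images

model.train()                   # training phase: mini-batch statistics
y_train = model(x)

model.eval()                    # inference phase: running statistics
with torch.no_grad():
    y_eval = model(x)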