Statements (17)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:problem_in_deep_learning |
| gptkbp:affects | ReLU activation function |
| gptkbp:alsoKnownAs | dead ReLU problem |
| gptkbp:cause | loss of network capacity; neurons output zero for all inputs |
| gptkbp:discusses | deep learning literature |
| gptkbp:mitigatedBy | gptkb:Leaky_ReLU; gptkb:Parametric_ReLU; Randomized ReLU; lower learning rate; proper weight initialization |
| gptkbp:occurredIn | weights update causes neuron to only output zero |
| gptkbp:relatedTo | vanishing gradient problem; training deep neural networks |
| gptkbp:bfsParent | gptkb:LeakyReLU |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | dying ReLU problem |
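The failure mode and its mitigation described by the statements above can be illustrated with a minimal NumPy sketch (illustrative only, not part of the source data): a weight update drives a neuron's bias far negative, so the ReLU pre-activation is below zero for every input, the output is always zero, and no gradient flows; Leaky ReLU keeps a small negative-side slope so the neuron can recover.

```python
import numpy as np

def relu(z):
    # Standard ReLU: zero for all negative pre-activations.
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # Leaky ReLU keeps a small slope alpha for z < 0,
    # so the gradient is nonzero and the neuron can still update.
    return np.where(z > 0, z, alpha * z)

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 4))   # hypothetical input batch
w = rng.normal(size=4)           # hypothetical weights
b = -100.0                       # bias pushed far negative, e.g. by a large update

z = x @ w + b                    # pre-activation is negative for every input
print(np.all(relu(z) == 0.0))        # the neuron is "dead": output is always zero
print(np.any(leaky_relu(z) != 0.0))  # Leaky ReLU still produces nonzero outputs
```

Because ReLU's gradient is exactly zero wherever its output is zero, a dead neuron stops learning entirely; the listed mitigations (Leaky/Parametric/Randomized ReLU, a lower learning rate, proper weight initialization) all reduce the chance of reaching or staying in that state.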