Statements (28)
Predicate | Object |
---|---|
gptkbp:instanceOf | activation function |
gptkbp:abbreviation | gptkb:ReLU |
gptkbp:advantage | reduces vanishing gradient problem |
gptkbp:disadvantage | can cause dead neurons |
gptkbp:category | gptkb:mathematical_concept |
gptkbp:category | machine learning concept |
gptkbp:contrastsWith | sigmoid function |
gptkbp:contrastsWith | tanh function |
gptkbp:form | f(x) = max(0, x) |
https://www.w3.org/2000/01/rdf-schema#label | Rectified Linear Unit |
gptkbp:inputRange | (-∞, ∞) |
gptkbp:introduced | gptkb:Hahnloser_et_al. |
gptkbp:introducedIn | 2000 |
gptkbp:property | computationally efficient |
gptkbp:property | non-linear |
gptkbp:property | not differentiable at zero |
gptkbp:property | sparse activation |
gptkbp:range | [0, ∞) |
gptkbp:knownIssue | gptkb:dying_ReLU_problem |
gptkbp:usedIn | gptkb:artificial_neural_networks |
gptkbp:usedIn | convolutional neural networks |
gptkbp:usedIn | deep learning |
gptkbp:usedIn | feedforward neural networks |
gptkbp:variant | gptkb:Exponential_Linear_Unit |
gptkbp:variant | gptkb:Leaky_ReLU |
gptkbp:variant | gptkb:Parametric_ReLU |
gptkbp:bfsParent | gptkb:ReLU |
gptkbp:bfsLayer | 6 |
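
The form, range, and property statements above translate directly into code. Below is a minimal NumPy sketch (the names `relu` and `relu_grad` are illustrative, not from the source): it shows f(x) = max(0, x), the output range [0, ∞), sparse activation on negative inputs, and the common convention of taking the gradient at x = 0 to be 0, since the function is not differentiable there.

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: f(x) = max(0, x), mapping (-inf, inf) to [0, inf)."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Derivative of ReLU: 1 for x > 0, 0 for x < 0.
    Not differentiable at x = 0; by convention the gradient there is taken as 0."""
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ] -> negative inputs zeroed (sparse activation)
print(relu_grad(x))  # [0. 0. 0. 1. 1.]     -> gradient 1 for x > 0, so it does not vanish
```

The constant gradient of 1 on the positive side is what the advantage statement refers to: unlike the sigmoid and tanh functions it contrasts with, ReLU does not squash large activations, which reduces the vanishing gradient problem. The flip side is the disadvantage above: a neuron whose inputs stay negative receives zero gradient and can stop learning entirely (the dying-ReLU problem).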
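The three listed variants all modify the negative branch of ReLU so that neurons keep a nonzero gradient there, mitigating the dying-ReLU problem. A minimal sketch, assuming the standard textbook definitions and an illustrative default slope `alpha` (not taken from the source):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: small fixed slope alpha for x < 0 keeps gradients nonzero."""
    return np.where(x > 0, x, alpha * x)

def parametric_relu(x, alpha):
    """Parametric ReLU (PReLU): same form as Leaky ReLU, but alpha is learned."""
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    """Exponential Linear Unit: smooth negative branch alpha * (exp(x) - 1)."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(leaky_relu(x))  # negative inputs scaled by 0.01 instead of zeroed
print(elu(x))         # smooth saturation toward -alpha for large negative x
```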