Statements (28)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:activation_function |
| gptkbp:abbreviation | gptkb:ReLU |
| gptkbp:advantage | reduces vanishing gradient problem |
| gptkbp:disadvantage | can cause dead neurons<br>gptkb:dying_ReLU_problem |
| gptkbp:category | gptkb:machine_learning_concept<br>gptkb:mathematical_concept |
| gptkbp:contrastsWith | sigmoid function<br>tanh function |
| gptkbp:form | f(x) = max(0, x) |
| gptkbp:inputRange | (-∞, ∞) |
| gptkbp:introduced | gptkb:Hahnloser_et_al. |
| gptkbp:introducedIn | 2000 |
| gptkbp:property | computationally efficient<br>non-linear<br>not differentiable at zero<br>sparse activation |
| gptkbp:range | [0, ∞) |
| gptkbp:usedIn | gptkb:artificial_neural_networks<br>convolutional neural networks<br>deep learning<br>feedforward neural networks |
| gptkbp:variant | gptkb:Exponential_Linear_Unit<br>gptkb:Leaky_ReLU<br>gptkb:Parametric_ReLU |
| gptkbp:bfsParent | gptkb:ReLU |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Rectified Linear Unit |
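The gptkbp:form row gives ReLU in closed form. Below is a minimal NumPy sketch of that formula and its subgradient, illustrating the table's range, sparse-activation, and dead-neuron statements; the function names and example values are illustrative, not part of the knowledge base.

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    """Rectified Linear Unit: f(x) = max(0, x).

    Maps inputs from (-inf, inf) onto [0, inf); negative inputs
    are clamped to zero, which produces sparse activations.
    """
    return np.maximum(0.0, x)

def relu_grad(x: np.ndarray) -> np.ndarray:
    """Subgradient of ReLU: 1 for x > 0, 0 for x <= 0.

    f is not differentiable at exactly x = 0; returning 0 there is
    a common convention. Units whose pre-activations stay negative
    receive zero gradient forever (the dying-ReLU problem).
    """
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```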
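The gptkbp:variant row lists three variants by name only. This sketch uses their standard textbook definitions (the alpha defaults are conventional choices, not stated in the table); each keeps a nonzero response for negative inputs, which is how they mitigate dead neurons.

```python
import numpy as np

def leaky_relu(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Leaky ReLU: f(x) = x for x > 0, else alpha * x (small fixed slope)."""
    return np.where(x > 0, x, alpha * x)

def parametric_relu(x: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Parametric ReLU: same form as Leaky ReLU, but alpha is a
    parameter learned during training rather than fixed in advance."""
    return np.where(x > 0, x, alpha * x)

def elu(x: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Exponential Linear Unit: f(x) = x for x > 0,
    else alpha * (exp(x) - 1), which saturates smoothly toward -alpha."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(leaky_relu(x))  # approx [-0.02   -0.005   0.5     2.    ]
print(elu(x))         # approx [-0.8647 -0.3935  0.5     2.    ]
```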