Hornik's theorem

GPTKB entity

Statements (16)
Predicate Object
gptkbp:instanceOf gptkb:mathematical_concept
gptkbp:activationFunctionRequirement nonconstant, bounded, and continuous
gptkbp:alsoKnownAs gptkb:Universal_Approximation_Theorem
gptkbp:author gptkb:Kurt_Hornik
gptkbp:citation thousands of research papers
gptkbp:field gptkb:machine_learning
gptkbp:field neural networks
https://www.w3.org/2000/01/rdf-schema#label Hornik's theorem
gptkbp:impact established theoretical foundation for neural networks as universal function approximators
gptkbp:publicationYear 1989
gptkbp:publishedIn gptkb:Neural_Networks
gptkbp:relatedTo gptkb:Universal_Approximation_Theorem
gptkbp:relatedTo Cybenko's theorem
gptkbp:sentence A feedforward neural network with a single hidden layer containing a finite number of neurons can approximate any continuous function on compact subsets of R^n, under mild assumptions on the activation function.
gptkbp:bfsParent gptkb:Cybenko_theorem
gptkbp:bfsLayer 7
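The `gptkbp:sentence` statement above can be illustrated with a small numerical sketch. The snippet below builds a single-hidden-layer network of the form g(x) = c + Σ_k w_k σ(a(x − b_k)) with sigmoid activations (nonconstant, bounded, continuous, matching `gptkbp:activationFunctionRequirement`) that tracks a continuous target on the compact interval [0, 1]. This is not Hornik's proof, only a hand-constructed staircase of steep sigmoids; the helper name, unit count, and steepness are illustrative choices, and the target f(x) = sin(2πx) is an arbitrary example.

```python
import numpy as np

def sigma(z):
    # logistic sigmoid written via tanh to avoid overflow for large |z|
    return 0.5 * (1.0 + np.tanh(0.5 * z))

def one_hidden_layer_approx(f, n_units=100, steepness=2000.0):
    """Hypothetical helper: a constructive single-hidden-layer net on [0, 1].

    Each hidden unit is a steep sigmoid shifted to a partition knot; the
    output weights are the jumps of f between knots, so the network forms
    a staircase that follows f. A sketch of the theorem, not its proof.
    """
    knots = np.linspace(0.0, 1.0, n_units + 1)   # partition of [0, 1]
    jumps = np.diff(f(knots))                    # w_k = f(x_k) - f(x_{k-1})

    def g(x):
        x = np.asarray(x, dtype=float)
        # hidden layer: one sigmoid unit per knot x_1..x_N
        h = sigma(steepness * (x[..., None] - knots[1:]))
        # linear output layer plus bias f(x_0)
        return f(knots[0]) + h @ jumps

    return g

f = lambda x: np.sin(2.0 * np.pi * x)            # a continuous target on [0, 1]
g = one_hidden_layer_approx(f)
xs = np.linspace(0.0, 1.0, 2001)
max_err = float(np.max(np.abs(f(xs) - g(xs))))
print(f"max |f - g| on [0, 1]: {max_err:.4f}")
```

With 100 hidden units the uniform error is bounded roughly by the target's variation over one sub-interval (here about 2π/100 ≈ 0.06), and it shrinks as `n_units` grows, in line with the theorem's density claim.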