Cybenko theorem

GPTKB entity

Statements (19)
Predicate Object
gptkbp:instanceOf gptkb:mathematical_concept
gptkbp:activatedBy sigmoid
gptkbp:alsoKnownAs gptkb:Universal_Approximation_Theorem
gptkbp:appliesTo feedforward neural networks
gptkbp:appliesTo single hidden layer networks
gptkbp:author gptkb:George_Cybenko
gptkbp:citation highly cited
gptkbp:field gptkb:machine_learning
gptkbp:field gptkb:mathematics
gptkbp:field neural networks
https://www.w3.org/2000/01/rdf-schema#label Cybenko theorem
gptkbp:impact foundation of neural network theory
gptkbp:journalPublished gptkb:Mathematics_of_Control,_Signals,_and_Systems
gptkbp:publicationYear 1989
gptkbp:relatedTo gptkb:Hornik's_theorem
gptkbp:relatedTo gptkb:Kolmogorov–Arnold_representation_theorem
gptkbp:sentence A feedforward neural network with a single hidden layer containing a finite number of neurons can approximate any continuous function on compact subsets of R^n, given suitable activation functions.
gptkbp:bfsParent gptkb:George_Cybenko
gptkbp:bfsLayer 6
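
As a rough formalization of the gptkbp:sentence statement above, the following LaTeX sketch gives the standard form of Cybenko's 1989 result; the restriction to the unit cube I_n = [0,1]^n and to a sigmoidal activation sigma follows the original paper and is an assumption not recorded in this entity's statements.

\begin{theorem}[Cybenko, 1989]
Let $\sigma \colon \mathbb{R} \to \mathbb{R}$ be a continuous sigmoidal function, i.e.\
$\sigma(t) \to 1$ as $t \to +\infty$ and $\sigma(t) \to 0$ as $t \to -\infty$.
Then finite sums of the form
\[
  G(x) \;=\; \sum_{j=1}^{N} \alpha_j \, \sigma\!\bigl(w_j^{\mathsf{T}} x + \theta_j\bigr),
  \qquad w_j \in \mathbb{R}^n,\ \alpha_j, \theta_j \in \mathbb{R},
\]
are dense in $C(I_n)$, the space of continuous functions on the unit cube
$I_n = [0,1]^n$ equipped with the supremum norm. Equivalently, for every
$f \in C(I_n)$ and every $\varepsilon > 0$ there exists such a $G$ with
\[
  \lvert G(x) - f(x) \rvert < \varepsilon \quad \text{for all } x \in I_n .
\]
\end{theorem}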