Lottery Ticket Hypothesis

GPTKB entity

Statements (30)
Predicate Object
gptkbp:instanceOf scientific theory
gptkbp:author Jonathan Frankle, Michael Carbin
gptkbp:citation gptkb:ICLR_2019
gptkbp:citation gptkb:The_Lottery_Ticket_Hypothesis:_Finding_Sparse,_Trainable_Neural_Networks
gptkbp:citation 2019
gptkbp:debatedBy applicability to transfer learning
gptkbp:debatedBy effectiveness in different architectures
gptkbp:debatedBy generalizability to large-scale models
gptkbp:field gptkb:machine_learning
gptkbp:hasConcept Dense, randomly-initialized neural networks contain subnetworks that can be trained in isolation to reach comparable accuracy to the original network.
gptkbp:hasConcept winning ticket
https://www.w3.org/2000/01/rdf-schema#label Lottery Ticket Hypothesis
gptkbp:influenced network initialization studies
gptkbp:influenced pruning algorithms
gptkbp:influenced research on efficient neural networks
gptkbp:method Iterative pruning and retraining
gptkbp:proposedBy gptkb:Jonathan_Frankle
gptkbp:proposedBy gptkb:Michael_Carbin
gptkbp:publishedIn gptkb:International_Conference_on_Learning_Representations_(ICLR)
gptkbp:relatedTo deep learning
gptkbp:relatedTo model compression
gptkbp:relatedTo neural network pruning
gptkbp:testedBy gptkb:ImageNet_dataset
gptkbp:testedBy gptkb:MNIST_dataset
gptkbp:testedBy gptkb:CIFAR-10_dataset
gptkbp:testedBy image classification tasks
gptkbp:winningTicketDefinition A sparse subnetwork that can be trained from its original initialization to match the accuracy of the full network.
gptkbp:yearProposed 2018
gptkbp:bfsParent gptkb:Jonathan_Frankle
gptkbp:bfsLayer 7
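The gptkbp:method statement above, "iterative pruning and retraining," can be sketched in code: train, prune the smallest-magnitude surviving weights, rewind the remainder to their original initialization, and repeat. The final mask plus the original initialization is the "winning ticket" in the sense of gptkbp:winningTicketDefinition. This is a minimal NumPy illustration on a toy linear model, not the authors' implementation; the model, data, pruning rate, and training hyperparameters are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with a sparse ground-truth weight vector (assumed setup).
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:5] = rng.normal(size=5)
y = X @ w_true

def train(w, mask, steps=300, lr=0.01):
    """Gradient descent on least squares; only unmasked weights are updated."""
    for _ in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(X)
        w = w - lr * grad * mask
    return w

w_init = rng.normal(size=20)   # the original random initialization ("ticket")
mask = np.ones(20)             # start dense

# Iterative magnitude pruning: each round removes 50% of surviving weights
# and rewinds the rest to w_init before the next round of training.
for _round in range(3):
    w = train(w_init.copy(), mask)
    surviving = np.flatnonzero(mask)
    k = max(1, int(0.5 * len(surviving)))
    prune = surviving[np.argsort(np.abs(w[surviving]))[:k]]
    mask[prune] = 0.0

# Train the sparse subnetwork from its ORIGINAL initialization.
w_ticket = train(w_init.copy(), mask)
loss = np.mean((X @ (w_ticket * mask) - y) ** 2)
```

Three rounds of 50% pruning leave 3 of the 20 weights; training that subnetwork from `w_init` (rather than a fresh random draw) is what distinguishes a winning ticket from an arbitrary sparse network.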