DenseNet

GPTKB entity

Statements (50)
Predicate Object
gptkbp:instanceOf gptkb:convolutional_neural_network
gptkbp:activatedBy gptkb:ReLU
gptkbp:advantage reduces number of parameters
encourages feature reuse
improves information flow between layers
reduces vanishing-gradient problem
gptkbp:arXivID arXiv:1608.06993
gptkbp:block dense block
gptkbp:citation over 10,000
gptkbp:developedBy gptkb:Laurens_van_der_Maaten
gptkb:Gao_Huang
gptkb:Kilian_Q._Weinberger
gptkb:Zhuang_Liu
gptkbp:github https://github.com/liuzhuang13/DenseNet
gptkbp:growthForm growth rate (k): number of feature maps each layer adds
gptkbp:hasConcept connect each layer to every other layer in a feed-forward fashion
gptkbp:hasVariant gptkb:DenseNet-121
gptkb:DenseNet-169
gptkb:DenseNet-201
gptkb:DenseNet-264
https://www.w3.org/2000/01/rdf-schema#label DenseNet
gptkbp:inputToLayer concatenation of outputs from all previous layers
gptkbp:license gptkb:MIT_License
gptkbp:maximumDepth can be very deep (e.g., 121, 169, 201 layers)
gptkbp:normalization Batch Normalization
gptkbp:notableFor gptkb:Keras_Applications
gptkb:Torchvision
gptkbp:notablePublication gptkb:Densely_Connected_Convolutional_Networks
gptkbp:outputLayer Fully Connected Layer
gptkbp:parameterEfficiency high
gptkbp:platform gptkb:TensorFlow
gptkb:Keras
gptkb:PyTorch
gptkbp:pooling gptkb:Max_Pooling
Average Pooling
gptkbp:publicationDate 2017
gptkbp:publishedIn gptkb:CVPR_2017
gptkbp:relatedTo gptkb:ResNet
gptkb:VGGNet
gptkb:AlexNet
gptkbp:trainer gptkb:SVHN
gptkb:CIFAR-10
gptkb:CIFAR-100
gptkb:ImageNet
gptkbp:transitionLayer used between dense blocks
gptkbp:usedFor image classification
object detection
semantic segmentation
gptkbp:bfsParent gptkb:convolutional_neural_network
gptkbp:bfsLayer 5
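The dense-connectivity and growth-rate statements above can be sketched in a few lines. This is an illustrative NumPy stand-in, not the DenseNet reference implementation: a random linear map plus ReLU substitutes for each BN-ReLU-Conv unit, and all names (`dense_block`, `growth_rate`) are chosen here for clarity.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    # DenseNet's core idea: each layer's input is the channel-wise
    # concatenation of the block input and every preceding layer's output.
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)            # (C_in, H, W)
        # stand-in for BN-ReLU-Conv producing `growth_rate` new channels
        w = rng.standard_normal((growth_rate, inp.shape[0]))
        new = np.maximum(w @ inp.reshape(inp.shape[0], -1), 0.0)  # ReLU
        features.append(new.reshape(growth_rate, *x.shape[1:]))
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8, 8))   # 16 input channels, 8x8 feature map
out = dense_block(x, num_layers=4, growth_rate=12, rng=rng)
print(out.shape)   # channels grow linearly: 16 + 4 * 12 = 64
```

Because each layer only adds `growth_rate` channels instead of learning a full-width transformation, parameter count stays low while features from all depths remain directly accessible, which is the "feature reuse" advantage listed above.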