gptkbp:instanceOf
|
gptkb:graphics_card
|
gptkbp:announced
|
gptkb:GTC_2024
2024
|
gptkbp:architecture
|
gptkb:Blackwell
|
gptkbp:chipletDesign
|
yes
|
gptkbp:energyEfficiency
|
improved over Hopper
|
gptkbp:flagshipModel
|
gptkb:Nvidia_GB200
|
https://www.w3.org/2000/01/rdf-schema#label
|
Nvidia Blackwell GPUs
|
gptkbp:manufacturer
|
gptkb:Nvidia
|
gptkbp:memoryBandwidth
|
8 TB/s (B200)
|
gptkbp:memoryType
|
gptkb:HBM3E
|
gptkbp:notableFeature
|
multi-GPU scalability
secure AI processing
advanced NVLink interconnect
second-generation transformer engine
|
gptkbp:notableModel
|
gptkb:B100
B200
GB200 Grace Blackwell Superchip
|
gptkbp:numberOfChiplets
|
2 GPU dies (B200)
|
gptkbp:predecessor
|
Nvidia Hopper GPUs
|
gptkbp:processNode
|
gptkb:TSMC_4NP
|
gptkbp:releaseDate
|
2024
|
gptkbp:successor
|
Nvidia Rubin GPUs
|
gptkbp:supports
|
gptkb:Multi-Instance_GPU_(MIG)
gptkb:NVLink_5.0
gptkb:PCIe_5.0
gptkb:HBM3E_memory
gptkb:Tensor_Cores
FP16 precision
FP8 precision
INT8 precision
secure AI processing
confidential computing
BF16 precision
FP4 precision
FP6 precision
INT4 precision
transformer engine
|
gptkbp:targetMarket
|
gptkb:artificial_intelligence
data centers
|
gptkbp:transistorCount
|
208 billion (B200)
|
gptkbp:usedFor
|
cloud computing
data analytics
high performance computing
large language models
generative AI
|
gptkbp:usedIn
|
gptkb:Nvidia_DGX_systems
gptkb:Nvidia_HGX_systems
|
gptkbp:bfsParent
|
gptkb:Nvidia_B100_GPU
|
gptkbp:bfsLayer
|
7
|
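The record above follows a simple pipe-delimited layout: a property line (e.g. `gptkbp:architecture`), a `|` separator, one or more value lines, then a closing `|`. A minimal sketch of a parser for this layout, assuming that exact alternation (the function name `parse_record` is illustrative, not part of any gptkb tooling):

```python
def parse_record(text: str) -> dict[str, list[str]]:
    """Parse a pipe-delimited property/value record into
    a dict mapping each property to its list of values."""
    props: dict[str, list[str]] = {}
    current = None  # property whose values we are collecting
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line == "|":
            # A "|" after at least one value closes the block;
            # a "|" directly after a property line just opens it.
            if current is not None and props.get(current):
                current = None
            continue
        if current is None:
            current = line              # property (or rdfs:label URI) line
            props.setdefault(current, [])
        else:
            props[current].append(line)  # value line for current property
    return props
```

Multi-valued properties such as `gptkbp:supports` come back as lists, so `parse_record(record)["gptkbp:supports"]` would yield all fifteen precision/feature entries in order.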