| Property | Value(s) |
| --- | --- |
| gptkbp:instanceOf | gptkb:Neural_Engine |
| gptkbp:architecture | custom AI processor |
| gptkbp:competitor | gptkb:NVIDIA_A100, gptkb:NVIDIA_H100, AMD Instinct MI200 |
| gptkbp:connects | gptkb:RoCE, gptkb:Ethernet |
| gptkbp:formFactor | PCIe card, OAM module |
| gptkbp:manufacturer | gptkb:Habana_Labs |
| gptkbp:memoryBandwidth | 2.45 TB/s |
| gptkbp:memoryType | gptkb:HBM2E |
| gptkbp:numberOfAIcores | 24 |
| gptkbp:numberOfMediaEngines | 3 |
| gptkbp:numberOfTensorProcessors | 24 |
| gptkbp:officialWebsite | https://habana.ai/products/gaudi2/ |
| gptkbp:parentCompany | gptkb:Intel |
| gptkbp:platform | gptkb:TensorFlow, gptkb:PyTorch, gptkb:ONNX |
| gptkbp:powerConsumption | 600 W (TDP) |
| gptkbp:predecessor | gptkb:Gaudi |
| gptkbp:processNode | 7 nm |
| gptkbp:RAM | 96 GB |
| gptkbp:releaseYear | 2022 |
| gptkbp:supports | deep learning inference, deep learning training |
| gptkbp:targetMarket | data centers, AI workloads |
| gptkbp:technology | SynapseAI |
| gptkbp:bfsParent | gptkb:Intel_Habana, gptkb:Gaudi_accelerators, gptkb:Intel_Gaudi_accelerators |
| gptkbp:bfsLayer | 8 |
| https://www.w3.org/2000/01/rdf-schema#label | Gaudi2 |
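
The rows above are property/value pairs in the gptkb/gptkbp vocabulary, closed by an rdfs:label. Below is a minimal sketch, using rdflib, of how a few of these rows could be expressed as RDF triples and queried; the namespace URIs are placeholders assumed for illustration, not taken from the source.

```python
# Sketch: represent a few Gaudi2 rows as RDF triples and query them with rdflib.
# The gptkb / gptkbp namespace URIs are assumptions, not confirmed by the source.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("https://example.org/gptkb/")    # assumed entity namespace
GPTKBP = Namespace("https://example.org/gptkbp/")  # assumed property namespace

g = Graph()
gaudi2 = GPTKB["Gaudi2"]

# A subset of the table above, as subject-predicate-object triples.
g.add((gaudi2, RDFS.label, Literal("Gaudi2")))
g.add((gaudi2, GPTKBP["manufacturer"], GPTKB["Habana_Labs"]))
g.add((gaudi2, GPTKBP["parentCompany"], GPTKB["Intel"]))
g.add((gaudi2, GPTKBP["releaseYear"], Literal(2022)))
g.add((gaudi2, GPTKBP["platform"], GPTKB["PyTorch"]))
g.add((gaudi2, GPTKBP["platform"], GPTKB["TensorFlow"]))

# Query: which platforms does the entity labelled "Gaudi2" support?
q = """
SELECT ?platform WHERE {
    ?chip rdfs:label "Gaudi2" .
    ?chip gptkbp:platform ?platform .
}
"""
for row in g.query(q, initNs={"rdfs": RDFS, "gptkbp": GPTKBP}):
    print(row.platform)
```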
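
The gptkbp:platform and gptkbp:technology rows indicate that Gaudi2 is programmed through the SynapseAI stack with PyTorch and TensorFlow front ends. The following is a minimal sketch of running a PyTorch workload on a Gaudi2 device, assuming the Habana PyTorch bridge (habana_frameworks) is installed and a Gaudi2 card is visible to the driver; the model and tensor shapes are illustrative only.

```python
# Sketch: execute a small PyTorch workload on a Gaudi2 device ("hpu")
# through the SynapseAI / habana_frameworks bridge (assumed installed).
import torch
import habana_frameworks.torch.core as htcore  # Habana PyTorch bridge

device = torch.device("hpu")  # Gaudi devices are exposed to PyTorch as "hpu"

model = torch.nn.Linear(1024, 1024).to(device)  # illustrative model
x = torch.randn(64, 1024, device=device)        # illustrative input

y = model(x)
htcore.mark_step()  # flush the accumulated lazy-mode graph to the accelerator

print(y.shape)
```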