Statements (66)
Predicate | Object
---|---
gptkbp:instance_of | gptkb:Graphics_Processing_Unit
gptkbp:architecture | CDNA 2
gptkbp:competes_with | gptkb:NVIDIA_A100
gptkbp:connects | PCIe 4.0
gptkbp:designed_for | gptkb:cloud_computing, Deep learning, High-Performance Computing, Machine learning, Scientific computing, Big data analytics
gptkbp:features | gptkb:Infinity_Fabric, Tensor cores, Advanced thermal management, Dynamic power management, Multi-threading capabilities, High bandwidth memory (HBM), Customizable performance profiles, Error correction code (ECC) memory, Robust software ecosystem
gptkbp:form_factor | Dual-slot
gptkbp:has_units | up to 128
https://www.w3.org/2000/01/rdf-schema#label | AMD's MI200 series
gptkbp:includes | Multi-die configuration
gptkbp:launch_event | gptkb:SC21
gptkbp:manufacturer | gptkb:AMD
gptkbp:memory_type | HBM2e
gptkbp:performance | up to 47.9 TFLOPS
gptkbp:power_consumption | up to 300 W
gptkbp:provides | Scalability, High reliability, Low latency, Enhanced security features, High throughput, High memory bandwidth, Flexible deployment options, Comprehensive developer tools
gptkbp:released_in | gptkb:2021
gptkbp:supports | gptkb:Tensor_Flow, gptkb:ROCm, gptkb:Vulkan, gptkb:Direct_X, gptkb:Py_Torch, gptkb:Open_CL, Virtualization, Machine learning frameworks, Containerized applications, AI workloads, Data parallelism, Edge computing, Hybrid cloud environments, Task parallelism, Multi-GPU configurations, Data science applications, High-performance networking, FP16 operations, INT8 operations, HPC workloads, Simulation workloads, Rendering workloads, FP64 operations
gptkbp:target_market | Data centers
gptkbp:uses | Chiplet design
gptkbp:vrready | MI250, MI250X
gptkbp:bfsParent | gptkb:NVIDIA's_Hopper_architecture
gptkbp:bfsLayer | 6