Statements (27)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:floating_point_format |
| gptkbp:abbreviation | TF32 |
| gptkbp:accuracy | lower than FP32 |
| gptkbp:category | computer arithmetic, numerical computing |
| gptkbp:compatibleWith | gptkb:IEEE_754_single-precision |
| gptkbp:defaultMathMode | gptkb:NVIDIA_Ampere_Tensor_Cores |
| gptkbp:exponentBits | 8 |
| gptkbp:introduced | gptkb:NVIDIA |
| gptkbp:introducedIn | 2020 |
| gptkbp:mantissaBits | 10 |
| gptkbp:platform | gptkb:NVIDIA_Ampere_GPUs, gptkb:NVIDIA_Hopper_GPUs |
| gptkbp:purpose | accelerate matrix operations, improve AI training speed |
| gptkbp:range | same as FP32 |
| gptkbp:relatedTo | FP16, FP32, bfloat16 |
| gptkbp:totalBits | 19 |
| gptkbp:usedFor | deep learning, AI training |
| gptkbp:usedIn | gptkb:NVIDIA_A100_GPU, gptkb:NVIDIA_Ampere_architecture |
| gptkbp:bfsParent | gptkb:Nvidia_Blackwell |
| gptkbp:bfsLayer | 6 |
| https://www.w3.org/2000/01/rdf-schema#label | TensorFloat-32 |
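A minimal sketch of the bit layout described above (1 sign + 8 exponent + 10 mantissa bits, 19 bits total): it emulates a TF32-style precision reduction in Python by masking the 13 least-significant mantissa bits of an IEEE 754 single-precision value. The function name and the use of truncation are illustrative assumptions; actual Tensor Core rounding behavior may differ.

```python
import struct

def tf32_truncate(x: float) -> float:
    """Emulate TF32 precision: keep 1 sign + 8 exponent + 10 mantissa bits
    by zeroing the low 13 mantissa bits of an FP32 value (truncation)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFE000  # clear the 13 least-significant mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

if __name__ == "__main__":
    # Precision is lower than FP32: 1 + 2**-11 needs more than 10 mantissa bits.
    print(tf32_truncate(1.0 + 2**-11))  # -> 1.0
    # Range is the same as FP32: the 8-bit exponent is preserved.
    print(tf32_truncate(3.0e38))        # still a large finite value
```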