Cerebras Wafer Scale Engine 1
GPTKB entity
Statements (56)
| Predicate | Object |
|---|---|
| gptkbp:instance_of | gptkb:tachymetric_scale |
| gptkbp:application | AI and machine learning |
| gptkbp:architecture | CS-1 |
| gptkbp:chip_type | gptkb:Astra_Zeneca |
| gptkbp:connects | high-speed interconnect |
| gptkbp:cooling_system | liquid cooling |
| gptkbp:data_usage | up to 100 TB/day |
| gptkbp:design_purpose | maximize performance; reduce power consumption; minimize latency |
| gptkbp:development | Cerebras engineering team |
| gptkbp:die_size | 46,225 square millimeters |
| gptkbp:form_factor | single chip; wafer scale |
| gptkbp:fuel_economy | 60 TOPS/W |
| gptkbp:has_units | 400,000 |
| https://www.w3.org/2000/01/rdf-schema#label | Cerebras Wafer Scale Engine 1 |
| gptkbp:innovation | energy-efficient design; real-time inference; high-bandwidth memory; high-performance AI training; on-chip memory; custom compute architecture; high-density chip design; large-scale model training; low-latency interconnects; massive parallel processing; wafer-scale integration |
| gptkbp:integration | integrated memory |
| gptkbp:is_scalable | high scalability |
| gptkbp:manufacturer | gptkb:Cerebras_Systems |
| gptkbp:market_launch | gptkb:2020 |
| gptkbp:market_segment | high-performance computing |
| gptkbp:notable_feature | largest chip ever made |
| gptkbp:part_of | Cerebras Systems product line |
| gptkbp:performance | up to 1.2 petaflops; 1.2 PFLOPS |
| gptkbp:power_consumption | 20 kW |
| gptkbp:processor | custom architecture |
| gptkbp:provides_support_for | gptkb:Tensor_Flow; gptkb:Py_Torch |
| gptkbp:ram | 40 GB; 10 TB/s |
| gptkbp:release_year | gptkb:2020 |
| gptkbp:service | gptkb:true |
| gptkbp:successor | gptkb:Cerebras_Wafer_Scale_Engine_2 |
| gptkbp:target_market | data centers |
| gptkbp:technology | AI accelerator |
| gptkbp:transistor_count | 1.2 trillion |
| gptkbp:use_case | computer vision; deep learning; natural language processing; reinforcement learning |
| gptkbp:year | gptkb:2019 |
| gptkbp:bfsParent | gptkb:Cerebras_Wafer_Scale_Engine_2 |
| gptkbp:bfsLayer | 6 |