gptkbp:instance_of
|
gptkb:AI_accelerator
|
gptkbp:application
|
computer vision
deep learning
natural language processing
reinforcement learning
generative models
|
gptkbp:architecture
|
AI accelerator
|
gptkbp:chip_size
|
46,225 square millimeters
|
gptkbp:competitors
|
gptkb:Google_TPU
gptkb:Intel
gptkb:AMD
gptkb:NVIDIA
|
gptkbp:connects
|
high bandwidth
|
gptkbp:cooling_system
|
liquid cooling
|
gptkbp:design_purpose
|
maximize performance
|
gptkbp:developed_by
|
gptkb:Cerebras_Systems
|
gptkbp:form_factor
|
wafer scale
|
gptkbp:has_units
|
850,000
|
https://www.w3.org/2000/01/rdf-schema#label
|
Cerebras Wafer Scale Engine 2
|
gptkbp:impact
|
enhanced data processing
accelerated AI research
cost efficiency in AI training
improved model training
|
gptkbp:integration
|
AI workloads
|
gptkbp:is_scalable
|
high
|
gptkbp:manufacturing_process
|
7nm
|
gptkbp:market_position
|
leading AI accelerator
|
gptkbp:notable_feature
|
single chip design
|
gptkbp:part_of
|
Cerebras Systems product line
|
gptkbp:partnerships
|
universities
major cloud providers
AI research labs
|
gptkbp:performance
|
sub-millisecond
up to 220 petaflops
|
gptkbp:power_consumption
|
20 kW
|
gptkbp:provides_support_for
|
gptkb:TensorFlow
gptkb:PyTorch
Cerebras Software Stack
|
gptkbp:ram
|
40 GB on-chip SRAM
20 PB/s memory bandwidth
|
gptkbp:release_date
|
gptkb:2021
|
gptkbp:service
|
wafer scale AI chip
|
gptkbp:predecessor
|
gptkb:Cerebras_Wafer_Scale_Engine_1
|
gptkbp:target_market
|
data centers
|
gptkbp:transistor_count
|
2.6 trillion
|
gptkbp:user_base
|
cloud service providers
large enterprises
research institutions
|
gptkbp:year_established
|
gptkb:2021
|
gptkbp:bfsParent
|
gptkb:Cerebras_Systems
|
gptkbp:bfsLayer
|
5
|