gptkbp:instance_of
|
gptkb:Tensor_Processing_Unit
|
gptkbp:application
|
deep learning
|
gptkbp:architecture
|
parallel processing
|
gptkbp:availability
|
gptkb:Google_Cloud
|
gptkbp:cloud_integration
|
gptkb:Google_Cloud_AI
|
gptkbp:collaboration
|
academic institutions
research labs
|
gptkbp:compatibility
|
gptkb:Google_Cloud_Platform
|
gptkbp:connects
|
high-speed network
|
gptkbp:data_type
|
float32
bfloat16
|
gptkbp:design_purpose
|
accelerate ML workloads
|
gptkbp:designed_by
|
gptkb:Google
|
gptkbp:form_factor
|
PCIe card
|
gptkbp:energy_efficiency
|
high
|
gptkbp:has_a_focus_on
|
AI and ML
|
gptkbp:has_units
|
2
|
https://www.w3.org/2000/01/rdf-schema#label
|
TPU v2
|
gptkbp:impact
|
AI development
|
gptkbp:integration
|
with Google services
|
gptkbp:is_scalable
|
high
|
gptkbp:language_support
|
gptkb:Python
|
gptkbp:market_position
|
leading in AI hardware
|
gptkbp:market_segment
|
enterprise
|
gptkbp:notable_technology
|
custom silicon
|
gptkbp:notable_users
|
gptkb:Google_services
|
gptkbp:part_of
|
Google AI ecosystem
|
gptkbp:performance
|
gptkb:MLPerf
up to 180 teraflops
|
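The "up to 180 teraflops" figure above is the usual headline number for a full TPU v2 board. A back-of-envelope sketch of what that peak rate means in practice (pure Python; the 100%-utilization assumption is an idealization, real workloads achieve less):

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    """A dense (m x k) @ (k x n) matmul costs 2*m*k*n floating-point ops (multiply + add)."""
    return 2 * m * k * n

PEAK_FLOPS = 180e12  # the record's "up to 180 teraflops"

# Ideal time for a large 16384^3 matmul at peak throughput.
secs = matmul_flops(16384, 16384, 16384) / PEAK_FLOPS
print(f"{secs * 1e3:.1f} ms")
```

Even at peak, a ~8.8 trillion-op matmul takes tens of milliseconds, which is why sustained utilization (not just peak teraflops) drives the "faster than GPUs" training-speed claim later in this record.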
gptkbp:power_consumption
|
up to 40 watts
|
gptkbp:predecessor
|
gptkb:TPU_v1
|
gptkbp:price
|
variable based on usage
|
gptkbp:primary_use
|
neural network training
|
gptkbp:programming_language
|
gptkb:Tensor_Flow
|
gptkbp:project
|
ongoing
|
gptkbp:ram
|
8 GB HBM
|
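Given the 8 GB HBM listed above and the float32/bfloat16 data types listed under support, a quick capacity estimate (pure Python; treats 8 GB as 8 GiB and ignores activations, gradients, and optimizer state, so these are upper bounds):

```python
HBM_BYTES = 8 * 1024**3  # "8 GB HBM" from the record, taken as 8 GiB

# Upper bound on model parameters resident in HBM at each supported precision.
for name, bytes_per_param in (("float32", 4), ("bfloat16", 2)):
    params = HBM_BYTES // bytes_per_param
    print(f"{name}: {params / 1e9:.2f} billion parameters")
```

Halving the bytes per parameter with bfloat16 doubles the parameter budget, which is one reason the format matters for "large datasets" support below.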
gptkbp:release_date
|
gptkb:2017
|
gptkbp:research_focus
|
gptkb:AI_technology
|
gptkbp:secondary_use_case
|
inference
|
gptkbp:successor
|
gptkb:TPU_v3
|
gptkbp:support
|
large datasets
float32 and bfloat16
|
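The float32 and bfloat16 support listed above is worth unpacking: bfloat16 is simply the top 16 bits of a float32, keeping the sign bit and full 8-bit exponent while truncating the mantissa from 23 bits to 7. A stdlib-only sketch of that conversion (plain truncation for illustration, not the round-to-nearest-even that hardware typically applies):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to its top 16 bits (bfloat16), dropping mantissa precision."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float(b: int) -> float:
    """Expand 16 bfloat16 bits back to a float value (low mantissa bits zeroed)."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# bfloat16 keeps float32's exponent range, so large magnitudes survive,
# but only ~3 significant decimal digits of precision remain.
print(bfloat16_bits_to_float(float32_to_bfloat16_bits(3.14159)))  # -> 3.140625
```

Preserving the float32 exponent range is what makes bfloat16 usable for training without loss scaling, unlike IEEE float16.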
gptkbp:target_market
|
AI researchers
|
gptkbp:throughput
|
high
|
gptkbp:training_programs
|
gptkb:Tensor_Flow
|
gptkbp:training_speed
|
faster than GPUs
|
gptkbp:used_for
|
gptkb:machine_learning
|
gptkbp:user_base
|
developers and researchers
|
gptkbp:user_feedback
|
positive
|
gptkbp:bfsParent
|
gptkb:Tensor_Flow
|
gptkbp:bfsLayer
|
4
|