gptkbp:instanceOf
|
inf1.24xlarge
inf1.2xlarge
inf1.6xlarge
inf1.xlarge
Amazon EC2 instance type
|
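A minimal sketch (assuming boto3 is installed and AWS credentials are configured; the region choice is illustrative) that queries the EC2 API for the four Inf1 sizes listed above and prints their vCPU and memory specs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Look up the four Inf1 sizes listed in this entry.
resp = ec2.describe_instance_types(
    InstanceTypes=["inf1.xlarge", "inf1.2xlarge", "inf1.6xlarge", "inf1.24xlarge"]
)
for it in resp["InstanceTypes"]:
    print(
        it["InstanceType"],
        it["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        it["MemoryInfo"]["SizeInMiB"] // 1024, "GiB memory",
    )
```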
gptkbp:availableOn
|
multiple AWS regions
|
gptkbp:designedFor
|
machine learning inference
|
gptkbp:documentation
|
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/inf1-instances.html
|
gptkbp:hasNetworkPerformance
|
up to 100 Gbps
|
gptkbp:hasPricingInfo
|
https://aws.amazon.com/ec2/instance-types/inf1/
|
gptkbp:hasProcessorCount
|
up to 16 AWS Inferentia chips
|
gptkbp:hasvCPUCount
|
up to 96 vCPUs
|
https://www.w3.org/2000/01/rdf-schema#label
|
AWS EC2 Inf1 instances
|
gptkbp:launched
|
2019
|
gptkbp:offeredBy
|
gptkb:Amazon_Web_Services
|
gptkbp:operatingSystem
|
gptkb:Ubuntu
gptkb:Linux
gptkb:Amazon_Linux
|
gptkbp:platform
|
gptkb:TensorFlow
gptkb:MXNet
gptkb:PyTorch
gptkb:ONNX
|
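A minimal sketch of compiling a PyTorch model for Inf1, assuming the torch-neuron package from the AWS Neuron SDK and torchvision are available (for example on an AWS Deep Learning AMI); the model choice and input shape are illustrative placeholders:

```python
import torch
import torch.neuron   # provided by the torch-neuron package (AWS Neuron SDK)
import torchvision

# Illustrative model; weights are irrelevant for a compilation sketch.
model = torchvision.models.resnet50().eval()
example = torch.zeros(1, 3, 224, 224)

# Ahead-of-time compile for the Inferentia NeuronCores, then save the artifact
# to load and run on an Inf1 instance.
neuron_model = torch.neuron.trace(model, example_inputs=[example])
neuron_model.save("resnet50_neuron.pt")
```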
gptkbp:RAM
|
up to 192 GiB
|
gptkbp:storage
|
EBS only
|
gptkbp:supportedBy
|
gptkb:AWS_Inferentia
|
gptkbp:supports
|
gptkb:AWS_Batch
gptkb:AWS_CloudFormation
gptkb:Auto_Scaling
gptkb:AWS_Identity_and_Access_Management
gptkb:Amazon_CloudWatch
gptkb:Elastic_Load_Balancing
gptkb:VPC
gptkb:EC2_Reserved_Instances
gptkb:Elastic_Inference
gptkb:EC2_Spot_Instances
EBS volumes
EC2 On-Demand Instances
AWS Deep Learning AMIs
AWS Lambda (via endpoints)
|
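A minimal sketch of launching an Inf1 instance with boto3; the AMI ID (standing in for an AWS Deep Learning AMI with the Neuron SDK) and key pair name are placeholders, and the Spot option simply illustrates the EC2 Spot Instances support listed above:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # region is an assumption

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                   # placeholder: Deep Learning AMI with Neuron SDK
    InstanceType="inf1.xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                             # placeholder key pair
    InstanceMarketOptions={"MarketType": "spot"},      # optional: EC2 Spot Instances
)
print(resp["Instances"][0]["InstanceId"])
```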
gptkbp:uses
|
natural language processing
speech recognition
image recognition
recommendation engines
|
gptkbp:bfsParent
|
gptkb:AWS_Neuron
|
gptkbp:bfsLayer
|
7
|