gptkbp:instanceOf
|
large language model
|
gptkbp:architecture
|
gptkb:transformer
|
gptkbp:availableOn
|
gptkb:Amazon_SageMaker
gptkb:Hugging_Face
gptkb:Meta_AI_website
gptkb:Microsoft_Azure
|
gptkbp:citation
|
gptkb:Touvron_et_al.,_2023
|
gptkbp:context
|
4096 tokens
|
gptkbp:developedBy
|
gptkb:Meta_AI
|
gptkbp:hasChatVersion
|
gptkb:Llama_2_70B_Chat
|
gptkbp:hasModel
|
decoder-only
|
gptkbp:hasVariant
|
gptkb:Llama_2_13B
gptkb:Llama_2_34B
gptkb:Llama_2_7B
|
https://www.w3.org/2000/01/rdf-schema#label
|
Llama 2 70B
|
gptkbp:input
|
gptkb:text
|
gptkbp:intendedUse
|
research
commercial applications
|
gptkbp:language
|
English
|
gptkbp:license
|
Llama 2 Community License (custom commercial license)
|
gptkbp:maxSequenceLength
|
4096 tokens
|
gptkbp:modelCardUrl
|
https://ai.meta.com/resources/models-and-libraries/llama-downloads/
|
gptkbp:notableFor
|
high performance on benchmarks
open weights for research and commercial use
|
gptkbp:openSource
|
false
|
gptkbp:output
|
gptkb:text
|
gptkbp:parameter
|
70 billion
|
gptkbp:partOf
|
gptkb:Llama_2
|
gptkbp:platform
|
multiple high-memory GPUs
|
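The "multiple high-memory GPUs" platform value follows directly from the 70-billion-parameter count. A back-of-envelope sketch (assuming fp16/bf16 weights at 2 bytes per parameter and 80 GB accelerators, e.g. A100-class cards — neither assumption comes from this record):

```python
# Rough memory estimate for holding Llama 2 70B weights in GPU memory.
# Assumptions (not stated in the record): fp16/bf16 storage (2 bytes per
# parameter) and 80 GB per GPU. Activations and KV cache add further overhead.
PARAMS = 70_000_000_000      # gptkbp:parameter = 70 billion
BYTES_PER_PARAM = 2          # fp16/bf16

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9   # decimal gigabytes
print(f"fp16 weights: {weights_gb:.0f} GB")   # 140 GB

# Ceiling division: minimum number of 80 GB GPUs for the weights alone.
gpus_needed = -(-int(weights_gb) // 80)
print(f"Minimum 80 GB GPUs for weights alone: {gpus_needed}")  # 2
```

Even before accounting for activations or the KV cache at the 4096-token context length, the weights alone exceed a single 80 GB device, which is why the record lists multiple high-memory GPUs as the platform requirement.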
gptkbp:predecessor
|
gptkb:Llama_1
|
gptkbp:releaseDate
|
July 2023
|
gptkbp:supportsChat
|
true
|
gptkbp:supportsFineTuning
|
true
|
gptkbp:tokenizer
|
gptkb:SentencePiece
|
gptkbp:trainer
|
publicly available data
reinforcement learning from human feedback
supervised fine-tuning
|
gptkbp:bfsParent
|
gptkb:Mixtral_8x7B
gptkb:Mixtral
|
gptkbp:bfsLayer
|
6
|