| Property | Value(s) |
| --- | --- |
| gptkbp:instanceOf | gptkb:large_language_model |
| gptkbp:architecture | gptkb:Mixture_of_Experts |
| gptkbp:bench | gptkb:GSM8K, gptkb:HumanEval, gptkb:MMLU |
| gptkbp:context | 32K tokens |
| gptkbp:developedBy | gptkb:Mistral_AI |
| gptkbp:github | https://github.com/mistralai/mixtral-8x7b |
| gptkbp:hasModel | gptkb:transformation |
| gptkbp:improves | gptkb:Llama_2_70B |
| gptkbp:language | gptkb:French, English |
| gptkbp:license | Apache 2.0 |
| gptkbp:notableFeature | Sparse MoE routing |
| gptkbp:notableFor | gptkb:research, content creation, chatbots, assistants |
| gptkbp:notableRelease | gptkb:Mixtral_8x7B |
| gptkbp:openSource | true |
| gptkbp:parameter | 46B |
| gptkbp:releaseDate | 2023 |
| gptkbp:supports | gptkb:translator, code generation, summarization, text generation, reasoning tasks |
| gptkbp:tokenizer | gptkb:SentencePiece |
| gptkbp:trainer | gptkb:law, gptkb:Wikipedia, books, web text |
| gptkbp:bfsParent | gptkb:Transformer_models, gptkb:Hugging_Face_Hub, gptkb:Hugging_Face_models, gptkb:Language_modeling |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Mixtral |
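The `gptkbp:notableFeature` entry, "Sparse MoE routing", refers to the Mixture-of-Experts mechanism in which a router selects only the top-2 experts per token, so most expert parameters stay idle on each forward pass. The following is a minimal, dependency-free sketch of that idea; the function and variable names are illustrative, not taken from any Mixtral codebase.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def matvec(w, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def top2_moe(x, gate_w, experts):
    """Sparse top-2 expert routing (simplified Mixtral-style sketch).

    x:       hidden state for one token (list of d floats)
    gate_w:  router matrix, one row of d weights per expert
    experts: list of expert weight matrices (each d rows of d weights)

    Only the two highest-scoring experts are evaluated; their outputs
    are mixed with softmax weights computed over the selected scores.
    """
    scores = matvec(gate_w, x)
    top2 = sorted(range(len(scores)), key=lambda i: scores[i])[-2:]
    gates = softmax([scores[i] for i in top2])
    out = [0.0] * len(x)
    for g, i in zip(gates, top2):
        for j, v in enumerate(matvec(experts[i], x)):
            out[j] += g * v
    return out
```

In the real model each "expert" is a full feed-forward block and routing happens per token per layer; this sketch only shows why the 46B total parameters do not all contribute to a single token's computation.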