Alternative names (6)
fineTunable • fineTunedBy • fineTunedFor • fineTunedFrom • fineTunedOn • fineTuningMethod

Random triples
| Subject | Object |
|---|---|
| gptkb:MPT_(Mosaic_Pretrained_Transformer) | instruction following |
| gptkb:OpenAssistant/pythia-12b-sft-v8-128s-steps | supervised fine-tuning (SFT) |
| gptkb:Llama_2 | chat |
| gptkb:Llama_3_70B_Code | software development tasks |
| gptkb:T5-XXL_language_model | true |
| gptkb:OpenAssistant/pythia-12b-sft-v8-8q-steps | supervised fine-tuning (SFT) |
| gptkb:Qwen-1.8B-Chat | gptkb:public_speaker |
| gptkb:OpenAssistant/pythia-12b-sft-v8-2q-steps | supervised fine-tuning |
| gptkb:Nous_Hermes_2_Yi | gptkb:Hermes_2_dataset |
| gptkb:OpenAssistant/pythia-12b-sft-v8-16q-steps | gptkb:OpenAssistant |
| gptkb:Llama_2_70B_Instruct | conversational tasks |
| gptkb:Qwen-1.8B | yes |
| gptkb:OpenAssistant/pythia-12b-sft-v8-1m-steps | supervised fine-tuning |
| gptkb:OpenAssistant/pythia-12b-sft-v8-128k-steps | OpenAssistant dataset |
| gptkb:OpenHermes | OpenAssistant dataset |
| gptkb:Yi-7B | true |
| gptkb:DeBERTa-Large | true |
| gptkb:OpenAssistant/pythia-12b-sft-v8-64m-steps | 64 million steps |
| gptkb:Meta_Llama_2 | true |
| gptkb:Llama-2-Chat | gptkb:public_speaker |
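
The rows above are subject/object pairs stored under a single fine-tuning predicate, with objects that are sometimes plain strings and sometimes other gptkb: entities. Below is a minimal sketch of how such triples could be loaded and queried with rdflib; the namespace URL and the choice of `fineTunedFor` as the predicate IRI (picked from the alternative names above) are assumptions for illustration, not the knowledge base's actual schema.

```python
# Minimal sketch: represent a few of the table rows as RDF triples and
# query them by predicate. Namespace and predicate IRI are assumed.
from rdflib import Graph, Literal, Namespace

GPTKB = Namespace("https://example.org/gptkb/")   # assumed namespace URL
FINE_TUNED_FOR = GPTKB["fineTunedFor"]            # assumed predicate IRI

g = Graph()
g.bind("gptkb", GPTKB)

# String-valued objects from the table become literals.
rows = [
    ("MPT_(Mosaic_Pretrained_Transformer)", "instruction following"),
    ("Llama_2", "chat"),
    ("Llama_2_70B_Instruct", "conversational tasks"),
]
for subject, obj in rows:
    g.add((GPTKB[subject], FINE_TUNED_FOR, Literal(obj)))

# Entity-valued objects (gptkb:...) become IRIs instead of literals.
g.add((GPTKB["Nous_Hermes_2_Yi"], FINE_TUNED_FOR, GPTKB["Hermes_2_dataset"]))

# List every subject/object pair stored under the fine-tuning predicate.
for s, _, o in g.triples((None, FINE_TUNED_FOR, None)):
    print(s, o)
```

Keeping entity-valued objects as IRIs rather than literals preserves their links to other gptkb: entries, so follow-up queries (e.g. everything known about gptkb:Hermes_2_dataset) stay possible.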