HellaSwag: Can a Machine Really Finish Your Sentence?
GPTKB entity
Statements (30)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:performance, gptkb:dataset |
| gptkbp:citation | 1000+ |
| gptkbp:contains | context sentence, four possible endings |
| gptkbp:creator | gptkb:Yejin_Choi, gptkb:Ali_Farhadi, gptkb:Rowan_Zellers, Ari Holtzman, Yonatan Bisk |
| gptkbp:difficulty | harder than SWAG |
| gptkbp:focus | commonsense story completion |
| gptkbp:format | multiple choice |
| gptkbp:goal | choose the most plausible ending |
| gptkbp:language | English |
| gptkbp:license | gptkb:MIT_License |
| gptkbp:memiliki_tugas | commonsense reasoning, sentence completion |
| gptkbp:notableFor | adversarial filtering, challenging for language models |
| gptkbp:numberOfRooms | 49,000 |
| gptkbp:publicationYear | 2019 |
| gptkbp:publishedIn | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) |
| gptkbp:relatedTo | SWAG dataset |
| gptkbp:url | https://rowanzellers.com/hellaswag/, https://arxiv.org/abs/1905.07830 |
| gptkbp:usedFor | evaluating natural language inference models |
| gptkbp:bfsParent | gptkb:HellaSwag |
| gptkbp:bfsLayer | 8 |
| https://www.w3.org/2000/01/rdf-schema#label | HellaSwag: Can a Machine Really Finish Your Sentence? |
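The statements above describe the task format: each example pairs a context sentence with four candidate endings, and a model must choose the most plausible one. A minimal sketch of that format follows; the `pick_ending` scorer here is a hypothetical word-overlap heuristic for illustration only, not the benchmark's actual evaluation (which scores each ending with a language model), and the example text is invented.

```python
# Sketch of the HellaSwag multiple-choice format: a context plus four
# candidate endings, with the model returning the index of its pick.
# The scorer is a naive word-overlap heuristic, purely illustrative.

def pick_ending(context: str, endings: list[str]) -> int:
    """Return the index of the ending sharing the most words with the context."""
    ctx_words = set(context.lower().split())
    scores = [len(ctx_words & set(e.lower().split())) for e in endings]
    return scores.index(max(scores))

example = {
    "ctx": "A man is standing in a kitchen. He",
    "endings": [
        "dives into a swimming pool.",
        "opens the fridge and takes out a carton of milk.",
        "is a man standing in a kitchen.",
        "flies over the mountains.",
    ],
}
print(pick_ending(example["ctx"], example["endings"]))  # → 2
```

Note that the heuristic picks the degenerate ending (index 2) rather than the plausible one (index 1), which illustrates why the dataset's adversarial filtering is needed: endings that fool shallow surface-level cues are deliberately retained to keep the benchmark challenging for language models.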