Statements (44)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:dataset |
| gptkbp:annotatedBy | crowdsourcing |
| gptkbp:answerType | no answer (SQuAD2.0); span from context |
| gptkbp:assesses | F1 score; Exact Match (metric sketch below the table) |
| gptkbp:benchmarkStatus | widely used |
| gptkbp:citation | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang. SQuAD: 100,000+ Questions for Machine Comprehension of Text. EMNLP 2016; Pranav Rajpurkar, Robin Jia, Percy Liang. Know What You Don’t Know: Unanswerable Questions for SQuAD. ACL 2018 |
| gptkbp:contains | questions; paragraphs; answers |
| gptkbp:createdBy | gptkb:Stanford_University |
| gptkbp:dataSplit | gptkb:train; dev; test |
| gptkbp:domain | machine reading comprehension |
| gptkbp:format | gptkb:JSON (record layout sketched below the table) |
| gptkbp:fullName | gptkb:Stanford_Question_Answering_Dataset |
| gptkbp:hasVersion | gptkb:SQuAD1.1; gptkb:SQuAD2.0 |
| https://www.w3.org/2000/01/rdf-schema#label | SQuAD dataset |
| gptkbp:impact | advanced research in reading comprehension; standardized evaluation for QA models |
| gptkbp:language | English |
| gptkbp:license | CC BY-SA 4.0 |
| gptkbp:memiliki_tugas | question answering |
| gptkbp:officialWebsite | https://rajpurkar.github.io/SQuAD-explorer/ |
| gptkbp:releaseYear | 2016 |
| gptkbp:source | Wikipedia articles |
| gptkbp:SQuAD1.1_size | 100,000+ question-answer pairs |
| gptkbp:SQuAD2.0_feature | unanswerable questions |
| gptkbp:SQuAD2.0_size | 150,000+ question-answer pairs |
| gptkbp:type | factoid; non-factoid |
| gptkbp:usedBy | gptkb:BERT; gptkb:ALBERT; gptkb:DistilBERT; gptkb:RoBERTa; gptkb:XLNet; many other NLP models |
| gptkbp:usedFor | benchmarking NLP models (loading example below the table) |
| gptkbp:bfsParent | gptkb:Stanford_NLP |
| gptkbp:bfsLayer | 6 |
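The gptkbp:format and gptkbp:contains statements above say the dataset ships as JSON built from Wikipedia paragraphs, questions, and answer spans. A minimal Python sketch of walking one of the released files follows; the field names match the public SQuAD 2.0 JSON schema, while the local filename is only an assumed example.

```python
import json

# Filename is an assumption; the dev set can be downloaded from the official website above.
with open("dev-v2.0.json", encoding="utf-8") as f:
    squad = json.load(f)

for article in squad["data"]:              # one entry per Wikipedia article
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]     # passage the answer span is taken from
        for qa in paragraph["qas"]:
            question = qa["question"]
            # SQuAD 2.0 marks unanswerable questions with is_impossible
            if qa.get("is_impossible", False):
                print(f"{question} -> no answer")
            else:
                answer = qa["answers"][0]  # answer text plus character offset into context
                print(f"{question} -> {answer['text']} (start {answer['answer_start']})")
```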
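For the gptkbp:assesses row, the benchmark's two headline metrics are Exact Match and token-level F1 computed over normalized answer strings. The sketch below reproduces the usual normalization and overlap logic in simplified form; it is not the official evaluation script and omits details such as taking the maximum over multiple gold answers and scoring unanswerable questions.

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> int:
    return int(normalize(prediction) == normalize(gold))

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Normans", "Normans"))                # 1 after normalization
print(round(f1_score("in northern France", "France"), 2))   # 0.5: partial token overlap
```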
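Since the dataset is widely used for benchmarking (gptkbp:usedFor, gptkbp:usedBy), models such as BERT and RoBERTa are typically evaluated on it through standard tooling rather than the raw JSON. One common route, assuming the Hugging Face `datasets` library is installed and its hub ids for the two versions are unchanged, looks like this:

```python
from datasets import load_dataset

# "squad" hosts SQuAD 1.1; "squad_v2" adds the unanswerable questions of SQuAD 2.0.
squad_v1 = load_dataset("squad")      # splits: train, validation
squad_v2 = load_dataset("squad_v2")

example = squad_v2["train"][0]
print(example["question"])
print(example["context"][:100])
print(example["answers"])             # {"text": [...], "answer_start": [...]}
```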