gptkbp:instanceOf | machine translation evaluation metric
gptkbp:basedOn | n-gram precision
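The n-gram precision BLEU builds on is the modified precision of the original paper: each candidate n-gram's count is clipped to the maximum number of times it occurs in any single reference, so a candidate cannot be rewarded for repeating a matched word. In LaTeX, over a candidate corpus:

    p_n = \frac{\sum_{C \in \{\text{Candidates}\}} \sum_{\text{ngram} \in C} \mathrm{Count}_{\mathrm{clip}}(\text{ngram})}{\sum_{C' \in \{\text{Candidates}\}} \sum_{\text{ngram}' \in C'} \mathrm{Count}(\text{ngram}')}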
gptkbp:citation | over 10,000
gptkbp:commonIn | natural language processing; machine translation research
gptkbp:contrastsWith | gptkb:ROUGE; gptkb:METEOR; gptkb:TER
gptkbp:criticizedFor | not capturing semantic adequacy; favoring surface similarity
gptkbp:fullName | gptkb:Bilingual_Evaluation_Understudy
gptkbp:higherScoreMeans | better translation quality
https://www.w3.org/2000/01/rdf-schema#label | BLEU
gptkbp:includes | brevity penalty
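The brevity penalty offsets precision's bias toward short outputs: a candidate corpus of total length c is penalized when it is shorter than the effective reference length r. As defined in the original paper:

    \mathrm{BP} =
    \begin{cases}
      1 & \text{if } c > r \\
      e^{\,1 - r/c} & \text{if } c \le r
    \end{cases}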
gptkbp:input | machine translation output; reference translations
gptkbp:introduced | gptkb:Kishore_Papineni; gptkb:Salim_Roukos; gptkb:Todd_Ward; gptkb:Wei-Jing_Zhu
gptkbp:introducedIn | 2002
gptkbp:notRecommendedFor | sentence-level evaluation
gptkbp:openSource | gptkb:Moses; gptkb:NLTK; gptkb:SacreBLEU
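A minimal sketch of a corpus-level computation with NLTK, one of the open-source implementations listed above; the sentences are illustrative, and smoothing is applied because short texts can otherwise hit zero n-gram matches (one reason sentence-level use is not recommended):

    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    # One tokenized hypothesis with its list of tokenized references.
    hypotheses = [["the", "cat", "sat", "on", "the", "mat"]]
    references = [[["the", "cat", "is", "on", "the", "mat"],
                   ["there", "is", "a", "cat", "on", "the", "mat"]]]

    # corpus_bleu pools n-gram counts over the whole corpus before
    # combining them, matching BLEU's corpus-level design.
    score = corpus_bleu(
        references,
        hypotheses,
        weights=(0.25, 0.25, 0.25, 0.25),  # uniform weights, n-grams up to 4
        smoothing_function=SmoothingFunction().method1,
    )
    print(f"BLEU = {score:.4f}")  # a value between 0 and 1

SacreBLEU, by contrast, operates on untokenized strings and reports the same score scaled to 0-100.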
gptkbp:output | score between 0 and 1
gptkbp:publishedIn | gptkb:ACL_2002
gptkbp:relatedTo | automatic evaluation; corpus-level evaluation
gptkbp:usedFor | evaluating machine translation quality
gptkbp:uses | geometric mean; brevity penalty; modified n-gram precision
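These three components combine into the standard corpus-level score: a brevity-penalized geometric mean of the modified n-gram precisions p_n, typically with N = 4 and uniform weights w_n = 1/4:

    \mathrm{BLEU} = \mathrm{BP} \cdot \exp\!\left( \sum_{n=1}^{N} w_n \log p_n \right)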
gptkbp:bfsParent | gptkb:ROUGE; gptkb:Text_Summarization; gptkb:Data-to-Text_Generation; gptkb:WebNLG
gptkbp:bfsLayer | 6