gptkbp:instanceOf
|
evaluation metric
|
gptkbp:alternativeMetric
|
gptkb:ROUGE
gptkb:METEOR
gptkb:chrF
BERTScore
|
gptkbp:calculationMethod
|
brevity penalty for short translations
geometric mean of n-gram precisions
|
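The two calculation components above combine in the standard BLEU formula, shown here as an illustrative aside with uniform weights $w_n = 1/N$ (typically $N = 4$), candidate length $c$, and effective reference length $r$:

$$
\mathrm{BLEU} = \mathrm{BP} \cdot \exp\!\Big(\sum_{n=1}^{N} w_n \log p_n\Big),
\qquad
\mathrm{BP} =
\begin{cases}
1 & \text{if } c > r,\\
e^{\,1 - r/c} & \text{if } c \le r,
\end{cases}
$$

where $p_n$ is the modified (clipped) n-gram precision of order $n$.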
gptkbp:commonIn
|
natural language processing
|
gptkbp:contrastsWith
|
machine translation output
reference translations
|
gptkbp:criticizedFor
|
favoring short translations
not capturing semantic adequacy
|
gptkbp:fullName
|
gptkb:Bilingual_Evaluation_Understudy
|
gptkbp:higherScoreMeans
|
better translation quality
|
https://www.w3.org/2000/01/rdf-schema#label
|
BLEU metric
|
gptkbp:introduced
|
gptkb:Kishore_Papineni
|
gptkbp:introducedIn
|
2002
|
gptkbp:lowerScoreMeans
|
worse translation quality
|
gptkbp:measures
|
n-gram precision
|
gptkbp:nGramOrder
|
typically up to 4-grams
|
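A minimal Python sketch of the clipped (modified) n-gram precision measured above; the function name and example sentences are assumptions for illustration, not taken from any listed implementation:

```python
from collections import Counter

def modified_ngram_precision(candidate, references, n):
    """Clipped n-gram precision as used by BLEU (illustrative sketch)."""
    def ngrams(tokens, order):
        return Counter(tuple(tokens[i:i + order]) for i in range(len(tokens) - order + 1))

    cand_counts = ngrams(candidate, n)
    # Clip each candidate n-gram count by its maximum count in any single reference.
    max_ref_counts = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref_counts[gram] = max(max_ref_counts[gram], count)

    clipped = sum(min(count, max_ref_counts[gram]) for gram, count in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0

candidate = "the cat sat on the mat".split()
references = [["the", "cat", "is", "on", "the", "mat"]]
print(modified_ngram_precision(candidate, references, 1))  # unigram precision: 5/6
print(modified_ngram_precision(candidate, references, 2))  # bigram precision: 3/5
```

BLEU combines these precisions for n = 1 to 4 (by default) via the geometric mean and brevity penalty shown earlier.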
gptkbp:openSource
|
gptkb:Moses
gptkb:NLTK
gptkb:SacreBLEU
|
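A minimal usage sketch with two of the open-source implementations listed above, NLTK's sentence_bleu and SacreBLEU's corpus_bleu; the example sentences are illustrative:

```python
# NLTK: sentence-level BLEU on pre-tokenized text (uniform 4-gram weights by default).
from nltk.translate.bleu_score import sentence_bleu

references = [["the", "cat", "is", "sitting", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "sitting", "on", "a", "mat"]
print(sentence_bleu(references, candidate))  # float in [0, 1]

# SacreBLEU: corpus-level BLEU on raw strings, with standardized tokenization.
import sacrebleu

hypotheses = ["the cat is sitting on a mat"]
references = [["the cat is sitting on the mat"]]  # one inner list per reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)  # reported on a 0-100 scale
```

Moses ships a comparable command-line scorer (multi-bleu.perl) for plain-text files.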
gptkbp:output
|
score between 0 and 1
|
gptkbp:publishedIn
|
gptkb:ACL_2002
|
gptkbp:usedFor
|
machine translation evaluation
|
gptkbp:uses
|
brevity penalty
|
gptkbp:bfsParent
|
gptkb:Chris_Callison-Burch
|
gptkbp:bfsLayer
|
6
|