BLEU evaluation metric

GPTKB entity

Statements (24)
Predicate Object
gptkbp:instanceOf evaluation metric
gptkbp:instanceOf machine translation metric
gptkbp:alternativeMetric gptkb:ROUGE
gptkbp:alternativeMetric gptkb:METEOR
gptkbp:alternativeMetric gptkb:chrF
gptkbp:alternativeMetric BERTScore
gptkbp:commonIn natural language processing
gptkbp:commonIn machine translation research
gptkbp:contrastsWith machine translation output
gptkbp:contrastsWith reference translations
gptkbp:criticizedFor not capturing semantic adequacy
gptkbp:criticizedFor favoring surface similarity
gptkbp:fullName gptkb:Bilingual_Evaluation_Understudy
gptkbp:higherScoreMeans better translation quality
https://www.w3.org/2000/01/rdf-schema#label BLEU evaluation metric
gptkbp:includes brevity penalty
gptkbp:introduced gptkb:Kishore_Papineni
gptkbp:introducedIn 2002
gptkbp:measures n-gram precision
gptkbp:output score between 0 and 1
gptkbp:publishedIn gptkb:ACL_2002
gptkbp:usedFor evaluating machine translation quality
gptkbp:bfsParent gptkb:ROUGE_evaluation_metric
gptkbp:bfsLayer 7
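
The statements above mention n-gram precision, a brevity penalty, and a score between 0 and 1. For reference, the standard formulation from the Papineni et al. (2002) paper combines clipped n-gram precisions p_n (typically up to N = 4, with uniform weights w_n = 1/N) and a brevity penalty BP based on candidate length c and reference length r:

```latex
\mathrm{BLEU} = \mathrm{BP} \cdot \exp\!\Big(\sum_{n=1}^{N} w_n \log p_n\Big),
\qquad
\mathrm{BP} =
\begin{cases}
1 & \text{if } c > r,\\
e^{\,1 - r/c} & \text{if } c \le r.
\end{cases}
```

The geometric mean of precisions and the brevity penalty each lie in [0, 1], so the overall score does too, with higher values indicating closer n-gram overlap with the reference translations.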
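A minimal, self-contained sketch of that computation is shown below: sentence-level, single reference, no smoothing. This is an illustration only; established toolkits such as NLTK or sacreBLEU handle tokenisation, multiple references, and smoothing of zero counts.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference: geometric mean of
    clipped n-gram precisions (n = 1..max_n) times the brevity penalty."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(candidate, n)
        ref_counts = ngrams(reference, n)
        # Modified (clipped) precision: each candidate n-gram counts at most
        # as often as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if total == 0 or overlap == 0:
            return 0.0  # degenerate case; real implementations smooth instead
        log_precisions.append(math.log(overlap / total))

    # Brevity penalty: penalise candidates shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)

    return bp * math.exp(sum(log_precisions) / max_n)

# Usage: tokenised candidate and reference sentences; the score lies in [0, 1].
cand = "the cat sat on the mat".split()
ref = "there is a cat on the mat".split()
print(round(bleu(cand, ref, max_n=2), 3))
```

Corpus-level BLEU, as usually reported in machine translation research, aggregates the clipped counts and lengths over the whole test set before taking the ratios, rather than averaging per-sentence scores.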