Bilingual Evaluation Understudy Score
GPTKB entity
Statements (28)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | machine translation evaluation metric |
| gptkbp:abbreviation | gptkb:BLEU |
| gptkbp:basedOn | n-gram precision |
| gptkbp:calculationMethod | comparing candidate translation to reference translations |
| gptkbp:contrastsWith | human translation quality judgments |
| gptkbp:criticizedFor | not capturing semantic adequacy, favoring surface similarity, not handling synonyms |
| gptkbp:domain | natural language processing, computational linguistics |
| gptkbp:hasComponent | brevity penalty, modified n-gram precision |
| https://www.w3.org/2000/01/rdf-schema#label | Bilingual Evaluation Understudy Score |
| gptkbp:includes | brevity penalty |
| gptkbp:influenced | gptkb:ROUGE, gptkb:METEOR, gptkb:chrF |
| gptkbp:introduced | gptkb:Kishore_Papineni |
| gptkbp:introducedIn | 2002 |
| gptkbp:measures | similarity of candidate translations to reference translations |
| gptkbp:publishedIn | gptkb:ACL_2002 |
| gptkbp:relatedTo | gptkb:ROUGE, gptkb:METEOR, gptkb:TER, NIST score |
| gptkbp:usedFor | evaluating machine translation quality |
| gptkbp:bfsParent | gptkb:BLEU_Score |
| gptkbp:bfsLayer | 7 |