Bilingual Evaluation Understudy
GPTKB entity
Statements (24)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:machine_translation_evaluation_metric |
| gptkbp:abbreviation | gptkb:BLEU |
| gptkbp:basedOn | n-gram precision |
| gptkbp:field | natural language processing, computational linguistics |
| gptkbp:influenced | gptkb:ROUGE, gptkb:METEOR, gptkb:TER, ChrF |
| gptkbp:introduced | gptkb:Kishore_Papineni |
| gptkbp:introducedIn | 2002 |
| gptkbp:language | English |
| gptkbp:limitation | does not account for synonyms, sensitive to exact word matches |
| gptkbp:measures | similarity between machine translation and reference translation |
| gptkbp:penalty | short translations |
| gptkbp:publishedIn | gptkb:ACL_2002 |
| gptkbp:type | gptkb:automatic_evaluation_metric |
| gptkbp:usedFor | evaluating machine translation quality |
| gptkbp:uses | brevity penalty |
| gptkbp:bfsParent | gptkb:BLEU_metric, gptkb:BLEU |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Bilingual Evaluation Understudy |
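
The gptkbp:basedOn and gptkbp:uses statements above describe the core of the metric: modified n-gram precision combined with a brevity penalty that penalises short translations. Below is a minimal sentence-level sketch in Python of how those pieces combine, assuming whitespace-tokenised input; the function names and toy sentences are illustrative, not part of the GPTKB data.

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    # All contiguous n-grams of a token list, with counts.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of modified n-gram
    precisions (n = 1..max_n), scaled by a brevity penalty that
    penalises candidates shorter than the reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(candidate, n)
        ref_counts = ngrams(reference, n)
        # Modified precision: clip each candidate n-gram count by its
        # count in the reference translation.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if total == 0 or overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # Brevity penalty: 1 if the candidate is longer than the reference,
    # otherwise exp(1 - r/c), which penalises short translations.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else exp(1 - r / c)
    return bp * exp(sum(log(p) for p in precisions) / max_n)

# Illustrative usage (bigram BLEU so the toy sentences score non-zero):
candidate = "the cat sat on the mat".split()
reference = "the cat sat on the red mat".split()
print(bleu(candidate, reference, max_n=2))
```

Because the score rests on exact n-gram overlap, this sketch also makes the listed limitations visible: a synonym in the candidate contributes nothing to the overlap, which is why BLEU is sensitive to exact word matches.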