gptkbp:instanceOf
|
evaluation metric
|
gptkbp:commonIn
|
natural language processing
text summarization research
NLP competitions
|
gptkbp:contrastsWith
|
gptkb:BLEU_evaluation_metric
|
gptkbp:fullName
|
gptkb:Recall-Oriented_Understudy_for_Gisting_Evaluation
|
https://www.w3.org/2000/01/rdf-schema#label
|
ROUGE evaluation metric
|
gptkbp:includesMetric
|
gptkb:ROUGE-L
gptkb:ROUGE-N
gptkb:ROUGE-S
gptkb:ROUGE-SU
gptkb:ROUGE-W
|
gptkbp:introduced
|
gptkb:Chin-Yew_Lin
|
gptkbp:introducedIn
|
2004
|
gptkbp:measures
|
overlap of word sequences
overlap of n-grams
overlap of word pairs
|
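As an illustration of the n-gram overlap listed above, the recall-oriented ROUGE-N definition from Lin (2004), where Count_match is the maximum number of n-grams co-occurring in the candidate and a reference:

```latex
\mathrm{ROUGE\text{-}N} =
  \frac{\sum_{S \in \{\text{References}\}} \sum_{\mathrm{gram}_n \in S}
        \mathrm{Count}_{\mathrm{match}}(\mathrm{gram}_n)}
       {\sum_{S \in \{\text{References}\}} \sum_{\mathrm{gram}_n \in S}
        \mathrm{Count}(\mathrm{gram}_n)}
```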
gptkbp:openSource
|
gptkb:py-rouge
gptkb:rouge-score_(Python)
|
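A minimal usage sketch for the rouge-score package listed above (Google Research's Python implementation); the class and method names below reflect its published API, but treat the exact signatures as an assumption and check the installed version. The example sentences are illustrative only.

```python
# pip install rouge-score
from rouge_score import rouge_scorer

# Request ROUGE-1 and ROUGE-2 (instances of ROUGE-N) plus ROUGE-L.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the cat was found under the bed"
candidate = "the cat was under the bed"

# score(target, prediction) returns precision / recall / F-measure per metric.
scores = scorer.score(reference, candidate)
for name, s in scores.items():
    print(name, round(s.precision, 3), round(s.recall, 3), round(s.fmeasure, 3))
```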
gptkbp:ROUGE-L
|
measures longest common subsequence
|
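A minimal sketch of the longest-common-subsequence scoring behind ROUGE-L, following the recall/precision/F-measure formulation in Lin (2004); whitespace tokenization and a single reference are simplifying assumptions.

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of token lists x and y."""
    m, n = len(x), len(y)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

def rouge_l(reference, candidate, beta=1.0):
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    recall = lcs / len(ref)
    precision = lcs / len(cand)
    # F-measure weighting recall by beta, as in Lin (2004).
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)

print(rouge_l("the cat was found under the bed", "the cat was under the bed"))
```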
gptkbp:ROUGE-N
|
measures n-gram overlap
|
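A minimal recall-oriented ROUGE-N sketch matching the n-gram overlap definition above; a single reference, clipped match counts, and whitespace tokenization are assumptions of this illustration.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(reference, candidate, n=2):
    """Recall-oriented ROUGE-N: clipped n-gram matches / reference n-grams."""
    ref_counts = ngrams(reference.split(), n)
    cand_counts = ngrams(candidate.split(), n)
    matches = sum(min(count, cand_counts[gram]) for gram, count in ref_counts.items())
    total = sum(ref_counts.values())
    return matches / total if total else 0.0

print(rouge_n("the cat was found under the bed", "the cat was under the bed", n=2))
```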
gptkbp:ROUGE-S
|
measures skip-bigram overlap
|
gptkbp:ROUGE-SU
|
measures skip-bigram plus unigram overlap
|
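A minimal sketch of the skip-bigram counting behind ROUGE-S, with the unigram extension that gives ROUGE-SU; it shows the recall side only, assumes a single reference and whitespace tokenization, and places no limit on skip distance (Lin (2004) also describes a maximum-gap variant).

```python
from itertools import combinations
from collections import Counter

def skip_bigrams(tokens, include_unigrams=False):
    """All in-order word pairs with arbitrary gaps; ROUGE-SU also counts unigrams
    (equivalently, pairs formed with a begin-of-sentence marker)."""
    grams = Counter(combinations(tokens, 2))
    if include_unigrams:
        grams.update(("<s>", tok) for tok in tokens)
    return grams

def rouge_s_recall(reference, candidate, include_unigrams=False):
    ref = skip_bigrams(reference.split(), include_unigrams)
    cand = skip_bigrams(candidate.split(), include_unigrams)
    matches = sum(min(count, cand[gram]) for gram, count in ref.items())
    total = sum(ref.values())
    return matches / total if total else 0.0

print(rouge_s_recall("the cat was found under the bed",
                     "the cat was under the bed", include_unigrams=True))
```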
gptkbp:ROUGE-W
|
measures weighted longest common subsequence
|
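A minimal sketch of the weighted LCS dynamic program behind ROUGE-W from Lin (2004), which rewards consecutive matches via a weighting f(k) = k**alpha; the choice alpha = 2, whitespace tokenization, and a single reference are assumptions of this illustration.

```python
def wlcs(x, y, alpha=2.0):
    """Weighted LCS: a run of k consecutive matches contributes f(k) = k**alpha."""
    m, n = len(x), len(y)
    c = [[0.0] * (n + 1) for _ in range(m + 1)]  # weighted LCS scores
    w = [[0] * (n + 1) for _ in range(m + 1)]    # length of consecutive matches ending here
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                k = w[i - 1][j - 1]
                c[i][j] = c[i - 1][j - 1] + (k + 1) ** alpha - k ** alpha
                w[i][j] = k + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
                w[i][j] = 0
    return c[m][n]

def rouge_w(reference, candidate, alpha=2.0, beta=1.0):
    ref, cand = reference.split(), candidate.split()
    score = wlcs(ref, cand, alpha)
    if score == 0:
        return 0.0
    # Invert the weighting, f^{-1}(x) = x ** (1/alpha), before forming recall/precision.
    recall = (score / len(ref) ** alpha) ** (1 / alpha)
    precision = (score / len(cand) ** alpha) ** (1 / alpha)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)

print(rouge_w("the cat was found under the bed", "the cat was under the bed"))
```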
gptkbp:usedFor
|
machine translation evaluation
automatic evaluation of machine-generated text
summarization evaluation
|
gptkbp:bfsParent
|
gptkb:Chin-Yew_Lin
gptkb:Eduard_Hovy
|
gptkbp:bfsLayer
|
6
|