Cohen's kappa

GPTKB entity

Statements (52)
Predicate Object
gptkbp:instance_of statistical measure
gptkbp:applies_to clinical assessments
healthcare settings
binary classifications
nominal classifications
ordinal classifications
gptkbp:developed_by gptkb:Jacob_Cohen
gptkbp:enhances data quality
https://www.w3.org/2000/01/rdf-schema#label Cohen's kappa
gptkbp:is_a_measure_in psychometrics
gptkbp:is_affected_by prevalence of categories
gptkbp:is_analyzed_in confusion matrix
diagnostic tests
classification models
the level of agreement
gptkbp:is_cited_in academic literature
gptkbp:is_compared_to two raters' classifications
gptkbp:is_critical_for statistical reporting
gptkbp:is_described_as the proportion of agreement corrected for chance
gptkbp:is_essential_for reproducibility in research
validating research findings
gptkbp:is_evaluated_by surveys
evaluating consistency
gptkbp:is_expressed_as (Po - Pe) / (1 - Pe) (see the worked example after the statement list)
gptkbp:is_fundamental_to data analysis
gptkbp:is_influenced_by sample size
gptkbp:is_often_used_in clinical trials
gptkbp:is_similar_to Fleiss' kappa
Cohen's d
gptkbp:is_utilized_in quality control
gptkbp:key statistical analysis
meta-analysis
gptkbp:measures inter-rater reliability
categorical agreement
statistical agreement
the reliability of ratings
gptkbp:notable_for statistical tool
gptkbp:performed_by other reliability measures
gptkbp:range -1 to 1
gptkbp:reports_to research studies
gptkbp:represents expected agreement by chance
observed agreement
gptkbp:is_sensitive_to the number of categories
gptkbp:suitable_for two-rater scenarios
gptkbp:type_of agreement coefficient
gptkbp:used_in gptkb:hospital
gptkb:psychologist
social sciences
gptkbp:values 0 indicates no agreement beyond chance
1 indicates perfect agreement
gptkbp:bfsParent gptkb:Matthew's_Correlation_Coefficient
gptkbp:bfsLayer 6
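
Worked example

The gptkbp:is_expressed_as statement above gives the defining formula kappa = (Po - Pe) / (1 - Pe), where Po is the observed proportion of agreement and Pe is the agreement expected by chance from each rater's marginal label frequencies. The following is a minimal Python sketch, not part of the GPTKB entry; the function name cohens_kappa and the sample labels are illustrative. It computes both quantities for two raters over nominal labels and applies the formula.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items (nominal categories)."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("both raters must label the same non-empty set of items")
    n = len(rater_a)
    # Po: observed proportion of items on which the two raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Pe: chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() & freq_b.keys()) / n ** 2
    if p_e == 1:  # degenerate case: both raters always assign one and the same label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Eight items rated yes/no by two raters: Po = 0.75, Pe = 0.5, so kappa = 0.5.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(a, b))  # 0.5; 1 would be perfect agreement, 0 chance level

For real analyses, an established implementation such as sklearn.metrics.cohen_kappa_score in scikit-learn computes the same statistic and can serve as a cross-check.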