Statements (22)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:academic |
| gptkbp:analyzes | gptkb:Face++_facial_recognition, gptkb:IBM_facial_recognition, gptkb:Microsoft_facial_recognition, commercial gender classification systems |
| gptkbp:citation | AI ethics research |
| gptkbp:conductedBy | gptkb:Timnit_Gebru, gptkb:Joy_Buolamwini |
| gptkbp:focusesOn | algorithmic bias, facial analysis algorithms |
| gptkbp:foundIn | higher error rates for darker-skinned females, disparities in accuracy across gender and skin type |
| gptkbp:influenced | corporate policy changes, public debate on AI ethics |
| gptkbp:location | gptkb:MIT_Media_Lab |
| gptkbp:method | intersectional analysis of gender and skin type |
| gptkbp:publicationYear | 2018 |
| gptkbp:publishedIn | gptkb:Proceedings_of_Machine_Learning_Research |
| gptkbp:trainer | gptkb:Pilot_Parliaments_Benchmark |
| gptkbp:bfsParent | gptkb:Joy_Buolamwini |
| gptkbp:bfsLayer | 6 |
| https://www.w3.org/2000/01/rdf-schema#label | Gender Shades study |
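The rows above are predicate/object statements about a single subject entity. A minimal sketch of how a few of them could be loaded as RDF triples with rdflib is shown below; the namespace URIs and the `Gender_Shades_study` subject identifier are illustrative assumptions, not the knowledge base's published URIs.

```python
# Sketch: a handful of the statements above as RDF triples in rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS, XSD

GPTKB = Namespace("http://example.org/gptkb/")    # hypothetical base URI
GPTKBP = Namespace("http://example.org/gptkbp/")  # hypothetical base URI

g = Graph()
study = GPTKB["Gender_Shades_study"]  # assumed subject identifier

# Entity-valued objects use the gptkb: namespace; plain strings stay literals.
g.add((study, GPTKBP.conductedBy, GPTKB["Timnit_Gebru"]))
g.add((study, GPTKBP.conductedBy, GPTKB["Joy_Buolamwini"]))
g.add((study, GPTKBP.publishedIn, GPTKB["Proceedings_of_Machine_Learning_Research"]))
g.add((study, GPTKBP.publicationYear, Literal("2018", datatype=XSD.gYear)))
g.add((study, GPTKBP.focusesOn, Literal("algorithmic bias")))
g.add((study, RDFS.label, Literal("Gender Shades study")))

print(g.serialize(format="turtle"))
```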