A growing global concern is that automated recognition systems may exacerbate and reinforce the biases that societies already hold with respect to gender, age, race, and sexual orientation. The consequences of automated gender recognition in particular remain poorly understood and are often underestimated.
Gender stereotyping is a complex process that varies widely among countries and that, although grounded in strong beliefs about what gender is and should be, is both applied and understood too simplistically. Although two international human rights treaties include explicit obligations relating to harmful and wrongful stereotyping, these provisions were written in an era when 'man' and 'woman' were the only recognized genders.
Lack of global guidance
Moreover, the global landscape of AI ethics guidelines does not provide adequate guidance in this respect. This governance challenge is compounded by technical practice (gender classifier systems likewise encode a binary understanding of gender) and by siloed disciplinary approaches.