Create more inclusive algorithms
Gendering Algorithms is a project working towards creating more inclusive algorithms.
Organizations around the globe employ inferential analytics to guess user characteristics and preferences. These practices, common across global social media platforms, are often opaque and yield detailed profiles of citizens worldwide, including sensitive attributes such as race, gender, sexual orientation, and political opinions. The resulting inferences feed downstream decision-making processes that significantly affect citizens in various ways, such as the automatic refusal of an online credit application, e-recruiting without any human intervention, or the misdiagnosis of certain diseases.
A growing global concern is that automated recognition systems may exacerbate and reinforce the biases that different societies hold with respect to gender, age, race, and sexual orientation. The consequences of automated gender recognition in particular are poorly understood and often underestimated.
Gender stereotyping is a complex process that varies widely among countries and that, although grounded in strong beliefs about what gender is and should be, is often used and understood too simplistically. Two international human rights treaties include explicit obligations relating to harmful and wrongful stereotyping, but these provisions reflect the mentality of a time when 'man' and 'woman' were the only recognized genders.
Lack of global guidance
Moreover, the global landscape of AI ethics guidelines offers little guidance in this respect. This governance challenge is compounded by technical practice (gender classifier systems likewise encode a binary understanding of gender) and by siloed disciplinary approaches.
Want to know more?
Read our full grant-winning proposal below.