Research

In this section, we compile part of the research we have conducted under the topic of 'Gendering Algorithms'.

Fosch-Villaronga, E., Drukarch, H., Khanna, P., Verhoef, T., & Custers, B. (2022). Accounting for diversity in AI for medicine. Computer Law & Security Review, 47, 105735.

In healthcare, gender and sex considerations are crucial because they affect individuals' health and disease differences. Yet, most algorithms deployed in the healthcare context do not consider these aspects and do not account for bias detection. Missing these dimensions in algorithms used in medicine is a major concern, as neglecting them will inevitably produce far-from-optimal results and generate errors that may lead to misdiagnosis and potential discrimination. This paper explores how current algorithmic-based systems may reinforce gender biases and affect marginalized communities in healthcare-related applications. To do so, we bring together notions and reflections from computer science, queer media studies, and legal insights to better understand the magnitude of failing to consider gender and sex differences in the use of algorithms for medical purposes. Our goal is to illustrate the potential impact that algorithmic bias may have on inadvertent discrimination, safety, and privacy-related concerns for patients in increasingly automated medicine. This is necessary because, by rushing the deployment of AI technologies that do not account for diversity, we risk even more unsafe and inadequate healthcare delivery. By promoting attention to privacy, safety, diversity, and inclusion in algorithmic developments with health-related outcomes, we ultimately aim to inform the global Artificial Intelligence (AI) governance landscape and practice on the importance of integrating gender and sex considerations in the development of algorithms, so as to avoid exacerbating existing prejudices or creating new ones.


Fosch-Villaronga, E. & Poulsen, A. (2022). Diversity and Inclusion in Artificial Intelligence. In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Regulating AI and Applying AI in Legal Practice. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague, 109–134.

Discrimination and bias are inherent problems of many AI applications, as seen in, for instance, face recognition systems that do not recognize dark-skinned women and content moderation tools that silence drag queens online. These outcomes may derive from limited datasets that do not fully represent society as a whole, or from the Western-male configuration bias of the AI scientific community. Although this is a pressing issue, understanding how AI systems can replicate and amplify inequalities and injustice among underrepresented communities is still in its infancy in the social science and technical communities. This chapter contributes to filling this gap by exploring the research question: what do diversity and inclusion mean in the context of AI? It reviews the literature on diversity and inclusion in AI to unearth the underpinnings of the topic and identify key concepts, research gaps, and evidence sources to inform practice and policymaking in this area. Attention is directed to three levels of the AI development process: the technical, the community, and the target-user level. The latter is expanded upon, providing concrete examples of communities usually overlooked in the development of AI, such as women, the LGBTQ+ community, senior citizens, and disabled persons. Sex and gender diversity considerations emerge as the most at risk in AI applications and practices and are thus the focus here. To help mitigate the risks that missing sex and gender considerations in AI could pose for society, the chapter closes by proposing gendering algorithms, more diverse design teams, and more inclusive and explicit guiding policies. Overall, this chapter argues that by integrating diversity and inclusion considerations, AI systems can be made more attuned to all-inclusive societal needs, respect fundamental rights, and represent contemporary values in modern societies.

While robots in medical care are becoming increasingly prevalent, direct interaction with users raises new ethical and social issues that have an impact on the law and regulatory initiatives. One of those concerns, still underexplored, is how to make these robots fit for users who come in different shapes, sizes, and genders. Although mentioned in the literature, these concerns have not yet been reflected in industry standards. For instance, ISO 13482:2014 on safety requirements for personal care robots briefly acknowledges that future editions might include more information about different kinds of people. More than seven years after its approval and after undergoing revision, those requirements are nonetheless still missing. Based on tests with robotic exoskeletons conducted as part of the H2020 EUROBENCH FSTP PROPELLING project, we argue that being oblivious to differences in gender and medical conditions, or following a one-size-fits-all approach, hides important distinctions and increases the exclusion of specific users. Our observations show that robotic exoskeletons operate intimately with users' bodies, exemplifying how gender and medical conditions may introduce dissimilarities in human-robot interaction that, as long as they remain ignored in regulations, may compromise user safety. We conclude the article by putting forward particular recommendations to update ISO 13482:2014 to better reflect the broad diversity of users of personal care robots.

Below you can find previous research on this topic conducted by some of the researchers involved in Gendering Algorithms.

Poulsen, A., Fosch-Villaronga, E., & Søraa, R. A. (2020). Queering Machines. Nature Machine Intelligence, Correspondence, 1-1.



Gender inferences in social media

Fosch-Villaronga, E., Poulsen, A., Søraa, R. A., & Custers, B. H. M. (2020). A little bird told me your gender: Gender inferences in social media. Information Processing & Management, 58(3), 102541.

Online and social media platforms employ automated recognition methods to infer user preferences, opinions, and sensitive attributes such as race, gender, and sexual orientation. These opaque methods can predict behaviors for marketing purposes and influence behavior for profit, serving attention economics but also reinforcing existing biases such as gender stereotyping. Although two international human rights treaties include explicit obligations relating to harmful and wrongful stereotyping, these stereotypes persist online and offline. By identifying how inferential analytics may reinforce gender stereotyping and affect marginalized communities, opportunities for addressing these concerns, and thereby increasing privacy, diversity, and inclusion online, can be explored. This is important because misgendering reinforces gender stereotypes, accentuates gender binarism, undermines privacy and autonomy, and may cause feelings of rejection, impacting people's self-esteem, confidence, and authenticity. In turn, this may increase social stigmatization. This study brings into view concerns of discrimination and exacerbation of existing biases that online platforms continue to replicate and that the literature is beginning to highlight. The implications of misgendering on Twitter are investigated to illustrate the impact of algorithmic bias on inadvertent privacy violations and the reinforcement of social prejudices of gender, through a multidisciplinary perspective that includes legal, computer science, and critical feminist media-studies viewpoints. An online pilot survey was conducted to better understand how accurately Twitter infers its users' gender identities. This served as a basis for exploring the implications of this social media practice.
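To make the kind of comparison described in the pilot survey concrete, below is a minimal, hypothetical Python sketch (not the study's actual code, data, or survey instrument) of how a platform's binary gender inferences could be compared against users' self-reported identities to compute per-group misgendering rates; all field names, labels, and records are illustrative assumptions.

```python
# Hypothetical sketch: per-group misgendering rates from paired labels.
# Records, field names, and label vocabularies are illustrative assumptions.
from collections import defaultdict

# Each record pairs a self-reported identity with the platform's inferred label.
responses = [
    {"self_reported": "woman", "inferred": "female"},
    {"self_reported": "non-binary", "inferred": "male"},
    {"self_reported": "man", "inferred": "male"},
    # ... further survey responses ...
]

# Map the classifier's binary output onto the self-reported categories it can match.
BINARY_MATCH = {"female": "woman", "male": "man"}

def misgendering_rates(records):
    """Return the share of mismatched inferences per self-reported identity."""
    totals, mismatches = defaultdict(int), defaultdict(int)
    for r in records:
        identity = r["self_reported"]
        totals[identity] += 1
        if BINARY_MATCH.get(r["inferred"]) != identity:
            mismatches[identity] += 1
    return {g: mismatches[g] / totals[g] for g in totals}

print(misgendering_rates(responses))
# A binary classifier can never match non-binary identities, so that group's
# misgendering rate is 1.0 by construction, which is one way the abstract's
# point about gender binarism shows up in the numbers.
```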


Fair medicine & AI Conference

Diversity for AI in medicine