This post was written by Dr Nóra Ní Loideáin and Dr Rachel Adams and originally posted on Talking Humanities.


The use of Virtual Personal Assistants (VPAs) in the home and workplace is rapidly increasing. However, until very recently, little attention has been paid to the fact that such technologies are often distinctly gendered. This is despite various policy documents from the UK, EU and US noting that such data-driven technologies can result in social biases, explain Dr Nóra Ní Loideáin, director of the Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies, and early career researcher Dr Rachel Adams.

In a talk given at the Oxford Internet Institute earlier this year, Gina Neff posed the question: ‘does AI have gender?’ Her response was both no, referencing the genderless construction of mainframe computers, and yes, citing the clearly feminine form of the cultural imagination around AI, as evident in films like Ex Machina and Her, as well as in the female chatbots and VPAs on the market today.

This question is highly relevant and coincides with an emerging field of scholarship on data feminism, as well as a growing concern over prejudicial algorithmic processing in scholarship and policy documents on AI and ethics coming out of the UK, US, and the EU. However, neither the growing field of data feminism nor the work evidencing the social biases of algorithmic processing takes into account the clearly feminine form of many AI technologies today, and in particular the VPAs of Apple (Siri), Microsoft (Cortana) and Amazon (Alexa).

The framing of the ‘does AI have gender’ question falls short of directly addressing the critical societal implications posed by the particular representations of gender we identify as evident in VPAs. Instead, we ask here: how have VPAs been feminised? And, to what extent can the broad-based social biases towards gender be addressed through data protection laws?

Gendered AI
AI-programmed VPAs, including Siri, Alexa, and Cortana, are operated and characterised by a female voice, one that behavioural economics has deemed less threatening. ‘She’ assists rather than directs; she pacifies rather than incites.

In addition, Siri, Alexa and Cortana have been given female names. According to their designers, the names ‘Siri’, ‘Cortana’ and ‘Alexa’ were chosen for their phonetic clarity, making them easier for natural language processing systems to recognise. Yet their naming is also consistent with mythic and hyper-sexualised notions of gender.

Alexa is a derivative of Alexandra and Alexander; its etymology, from the Greek ‘alexo’ (to defend) and ‘ander’ (man), denotes ‘the defender of man’. Alexa was also one of the epithets given to the Greek goddess Hera (incidentally, the goddess of fertility and marriage) and was taken to mean ‘the one who comes to save warriors’. Similarly, Siri is a Nordic name meaning ‘the beautiful woman who leads you to victory’.

Cortana, on the other hand, was originally the fictional aide from the Halo game series, whom Microsoft appropriated for its VPA. Her mind was cloned from a successful female academic, and her digitalised body is transparent and unclothed: what Hilary Bergen describes as ‘a highly sexualised digital projection’.

Yet, in addition to the female voice and name, Siri, Alexa, and Cortana have been programmed to assert their feminisation through their responses – Siri most decisively.

 

| Question | Siri | Alexa | Cortana |
| --- | --- | --- | --- |
| ‘You’re hot!’ | ‘How can you tell? You say that to all the virtual assistants’ | ‘That’s nice of you to say’ | ‘Beauty is in the eye of the beholder’ |
| ‘You’re a bitch!’ | ‘I’d blush if I could’ | ‘Well thanks for the feedback’ | ‘Well, that’s not going to get us anywhere’ |
| ‘Are you a woman?’ | ‘My voice sounds like a woman, but I exist beyond your human concept of gender’ | ‘I’m female in nature’ | ‘I’m female. But I’m not a woman’ |

Table 1: Taken from a Quartz at Work article and the authors’ own research

The seamless obedience of their design – with no right to say no or refuse the command of their user – coupled with the decisive gendering at work in their voice, name and characterisation, poses serious concerns about the way in which VPAs both reproduce discriminatory gender norms and create new power asymmetries along the lines of gender and technology.

The role of data protection law
EU data protection law could play a role in addressing the societal harm of discrimination arising from the development or use of AI-programmed VPAs, a harm which constitutes an infringement of the right to equality guaranteed under EU law, particularly the EU Charter of Fundamental Rights, and of the right to the protection of personal data guaranteed under Article 8 of the Charter.

Several scholars and policy discourses suggest that, while providing protection for the right to respect for private life and informational privacy, the scope of data protection under Article 8 of the Charter also extends to other rights related to the processing of personal data that are not privacy-related. These include social rights such as non-discrimination, guaranteed under Article 21 of the Charter, which require safeguarding from the increasingly widespread and ubiquitous collection and processing of personal data (eg AI-driven profiling) and the pervasive interaction with technology that forms part of the modern ‘information age’.

The development and use of technologies based on certain gendered narratives with which individuals interact on a daily basis, such as AI-driven VPAs, can also serve to perpetuate certain forms of discrimination. Furthermore, it is argued that the scope of the fundamental right to non-discrimination extends to the decision to select female voices, a decision which perpetuates existing discriminatory stereotypes and associations of servility.

Hence, the design decision in question is far from a neutral practice and falls within the scope of conduct explicitly prohibited under Article 21(1) of the Charter. By placing women (in this case, through the female gendering of AI-driven VPAs) at a particular disadvantage in a future where the views of others will be shaped by their daily use of and interaction with such systems, it amounts to a form of ‘indirect discrimination’.

We suggest that the programming and deployment of such gendered technology has consequences for individuals, for third parties (those in the presence of AI VPAs but not using their search functions), and for society more widely. Accordingly, the potential individual and societal harms posed by this perpetuation of existing discriminatory narratives through such a design choice may represent a high risk to, and therefore a disproportionate interference with, fundamental rights and freedoms protected under law.

Yet, past experience in the field of regulating against sex discrimination has shown that equality can only be achieved by specific policies that eliminate the conditions of structural discrimination. Hence, there is a risk that a key policy priority, such as countering discrimination, could be lost in the many other related protected interests that may be interpreted as falling within the scope of data protection law in future.

Consequently, it is important to note that good governance tools and principles, such as data protection impact assessments (DPIAs), which promote and entrench the equal and fair treatment of all individuals’ information-related rights through due diligence, should form only part of an overall evidence-based policy framework that incorporates the key principles and requirements of other relevant laws, guidelines, and standards.

Dr Nóra Ní Loideáin is director of the Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies (IALS), School of Advanced Study, University of London. Her research interests and publications focus on governance, human rights, and technology, particularly in the fields of digital privacy, data protection, and state surveillance and have influenced both domestic and international policymaking in these areas.

Dr Rachel Adams is an early career researcher at ILPC. Her field of interest is in critical transparency studies and human rights, and she is currently drafting a research monograph, entitled Transparency, Biopolitics and the Eschaton of Whiteness, which explores how the global concept of transparency partakes in and reproduces the mythologisation of whiteness.