RightsCon is the world’s leading summit on human rights in the digital age. RightsCon offers a platform for thousands of participants around the world to convene, connect, and contribute to a shared agenda for the future. It enables business leaders, activists, technologists, policymakers, journalists, philanthropists, researchers, and artists from around the world to interact and explore opportunities to advance human rights in the digital age. The 11th RightsCon Summit took place from 6 to 10 June 2022.

Jeni Tennison attended the summit and has provided some reflections from the following sessions.

a) Decolonizing co-design: Global South perspectives

This session looked at the concept of design thinking and co-design: how it arose in Scandinavia as a mechanism through which workers and employees could become involved in design in an industrial setting, and how it has been, and is being, adapted for use within the social sector and outside the Global North.

“I was interested in it because co-design as a method lives somewhere near the top of Arnstein’s Ladder of Citizen Participation and the IAP2 Spectrum of Public Participation. It should be a methodology that helps organisations and communities to design data governance processes together, for example. At the same time, we know a lot of those most affected by data governance decisions are going to be minoritised in one way or another, so decolonising the process – making it as approachable as possible by the range of participants we want to be included and specifically challenging Global North assumptions – is going to be important.”

A lot of the discussion centred on the panellists’ experiences facilitating co-design sessions. Jeni noted some practical tips that struck her:

  • always having two facilitators, at least one of whom is from the community that you’re co-designing with.
  • using an ice-breaker that involves people sharing happy or loving memories about common experiences (eg favourite foods) with strangers, to focus on positive feelings of common humanity.
  • not using the word “solutions” because it carries what can be a crushing expectation of finality, but also because co-design should be more focused on exploring the problem space than finding solutions.
  • viewing facilitators as servants to the participants, rather than as their guides.

The organisers of the panel, Innovation for Change, also shared a new “Spellbook” for co-design, InnoMojo, which looks useful for co-design efforts around data governance.

b) The state of personal data protection in Africa: a comparative approach

This was an interactive session focused on people in Africa sharing their experiences and perspectives of personal data protection laws across the continent. One way to track this is to look at which countries have ratified the African Union’s Malabo Convention on Cyber Security and Personal Data Protection.

“I went along to understand better the current state of data protection law across Africa, and to see whether there were any approaches that incorporated the more collective and participatory approaches to data governance that we’re advocating for.”

Most of the session focused on familiar challenges such as:

  • lack of ratification of the convention (no law means no rights)
  • if there is a law, lack of citizen awareness of those digital and data rights
  • lack of effective enforcement, due to weak or missing regulators

One panellist, speaking about the experience in Ghana, talked about how data is abstract and how the concept of “privacy” isn’t a familiar one in local ways of thinking. One of the participants described how even the origin and framing of “human rights” is shaped by American and European thinking on what rights look like. Unfortunately, the session ended before this could be explored in more detail.

c) Driving corporate action towards responsible and ethical artificial intelligence

This session was focused on the World Benchmarking Alliance’s Collective Impact Coalition for Digital Inclusion and insights from their Digital Inclusion Benchmark 2021. The World Benchmarking Alliance is all about improving corporate behaviours towards the Sustainable Development Goals, and the Digital Inclusion Benchmark looked specifically at corporate commitment and action around digital inclusion.

“I went to this session to better understand how to drive corporate behaviour specifically towards collective and participatory data governance, as this is an important (I think necessary) approach for producing more responsible and ethical AI.”

The headline figures from that report are that only 20 of the 150 companies they looked at have a commitment to ethical AI principles; even those that do commit to those principles don’t explicitly reference human rights; and only 15 have processes in place to assess the human rights risks posed by AI. Most of the conversation focused on getting companies to commit to a set of AI principles as a first step towards more responsible and ethical approaches overall.

It was particularly interesting having some investors on the panel, as they discussed their need for visibility on the risks and liabilities surrounding the human rights implications of AI, up and down the value chain.

One of the investor panellists did highlight the importance of stakeholder engagement as part of AI development processes. The report says:

**3.2.3 Engaging with affected and potentially affected stakeholders (CSI 6)**

Engaging with affected and potentially affected stakeholders is a critical part of a company’s approach to respecting human rights. This indicator looks at two criteria: a) The company discloses the categories of stakeholders whose human rights have been or may be affected by its activities; and b) the company provides at least two examples of its engagement with stakeholders (or their legitimate representatives or multi-stakeholder initiatives) whose human rights have been or may be affected by its activities in the last two years.

Only five companies (Acer, Amazon, Apple, Microsoft, and NEC) met both criteria, while 117 met neither. Apple is particularly notable in this regard, having conducted interviews with 57,000 supply chain workers in 2020. Apple also solicited feedback from almost 200,000 workers in 135 supply facilities in China, India, Ireland, UK, U.S., and Vietnam resulting in over 3,000 actions to address the workers’ concerns. Additionally, the company is investigating the use of new digital labour rights tools featuring data analytics to increase engagement with stakeholders.

When asked about good practices, however, the panellists talked about having few good examples to point to and a lack of clear good practices. Apparently, there were five companies within the 150 that had an AI oversight board, but these tended to be technocratic exercises built around technical expertise (in law, ethics, and human rights) rather than being made up of or incorporating lay members from affected communities.

This piece has been reposted from Connected by Data, with permission and thanks.

Dr Jeni Tennison is an expert in all things data, from technology to governance, strategy, and public policy. She is the founder of Connected by Data, a Shuttleworth Foundation Fellow, and an Affiliated Researcher at the Bennett Institute for Public Policy. Jeni is the co-chair of the Data Governance Working Group at the Global Partnership on AI, and sits on the Boards of Creative Commons, the Global Partnership for Sustainable Development Data, and the Information Law and Policy Centre. She has a PhD in AI and an OBE for services to technology and open data.