The ILPC Seminar – Toward a Global Index for Measuring the State of Responsible AI took place on 6 September 2023.  The event kicked off with a Keynote Lecture by Dr Rachel Adams. Her lecture presented a new project underway to develop a Global Index on Responsible AI – a rights-based tool to support a broad range of actors in advancing responsible AI practices. It is intended to provide a comprehensive, reliable, independent, and comparative benchmark for assessing progress toward responsible AI around the world.

Project Rationale

Speaking about the rationale for the project, Dr Adams explained that with the rapid rise of AI-driven technologies in recent years, advances have been made in developing principles, guidelines, and regulations to govern their development and use, yet nothing has sought to examine how these principles are actually implemented. She highlighted that majority-world experiences and expertise are not adequately reflected in global tools on responsible AI, and that there is an urgent need to diversify the concept of ‘Responsible AI’ so that it serves a truly global agenda and ensures that the development and use of AI in all parts of the world is inclusive and rights-respecting. A further motivation for the project is that indexes have historically been of limited use to the Global South; this project seeks to put the Global South’s needs at the forefront.

Dr Adams then gave an overview of the objectives and aims of the project, and how a definition for Responsible AI was reached. She explained that the project seeks to address the need for inclusive, measurable indicators that reflect a shared understanding of what responsible AI means in practice and track the implementation of responsible AI principles by governments and key stakeholders.

What is “Responsible AI”?

To reach a definition of Responsible AI and to understand what Responsible AI means and looks like to different groups around the world, extensive consultations were held with groups largely in the Global South. The consultations revealed that Responsible AI must address the full AI life cycle and value chain; human rights must extend beyond civil and political rights to include social and economic rights, environmental rights, collective rights, labour rights, and children’s rights; and that the responsibilities of the private sector (and the role of the state in determining and enforcing these) must be fully addressed.

Taking all of these into account, a definition was reached: “The responsible development, use and governance of AI requires every step be taken to ensure our planet and human communities are not adversely affected by these new technologies, and are used to benefit human development and democratic engagement worldwide.” This definition and the consultation results provide a constructive framework for the project and for evaluating the efficacy of the first instrument/index that is developed.

Dr Adams then discussed the project’s methodology, which centres on an expert survey completed by researchers around the world. The project is building an extensive network of researchers who will monitor what is happening in their respective countries and contribute to the debates and discussions in this area. The project is wide-ranging, covering 140 countries. Coordinated by the core team based at the African Observatory on Responsible AI, regional hubs will carry out key research tasks in their regions, such as validating indicators locally, recruiting and overseeing national researchers, supervising data collection and data quality, and disseminating results.

The data collected from the surveys will be scored, and calculations and analysis will be performed in well-known data analysis languages and tools to ensure the reproducibility of findings and trends. All data, including reports and other outputs of the study, will be openly accessible under the Creative Commons Attribution 4.0 International License. Dr Adams highlighted that the project follows a participative approach, with a wide range of global stakeholders being consulted to ensure that the perspectives of underserved and marginalised groups are incorporated.

Lastly, Dr Adams discussed a pilot that is currently underway. As Responsible AI is a new and emerging field, and as the Global Index questionnaire is being used for the first time and addresses topics that are difficult to assess, it was considered important to test it. The pilot has so far revealed that the questionnaire, as it stands, would take longer than anticipated to complete, and its scope will therefore be reduced. Concluding her presentation, she set out the project timeline: a capacity development programme in October, data collection from November to February, and thereafter analysis and review of the data collected.

Legal practice and historical perspectives

The panellists, Dr Susie Alegre (Doughty Chambers) and Professor Catherine Clarke (IHR), then shared their thoughts on the project and how it contributes to the wider conversations and debates surrounding the use of AI. Both panellists praised the work and research being undertaken to develop the index and commented on its vast scope and scale. They agreed that a Global Index on Responsible AI is incredibly important and that the project has the capacity to have a positive, practical, real-world impact. In particular, Dr Alegre highlighted some of the work that she is doing in relation to AI. She contended that one of the key questions about AI is not necessarily what it is designed for, but how it is being used, perceived, and delivered on the ground. Specifically, she spoke about the use of ChatGPT in the justice system and the impact of AI on the right to a fair trial. In her view, the Global Index would be useful for understanding what is happening on the ground.

Professor Clarke, speaking from a humanities perspective, highlighted the need for AI literacy and the importance of acknowledging cultural sensitivities and differences when looking at AI. She then spoke about the Indigenous AI project and the challenge of creating benchmarks for Responsible AI that are capacious enough to acknowledge and accommodate vibrant cultural differences. She praised the vast scale of the project and the fact that it is a truly diverse and inclusive project.

Overall, the event provided leading expertise and insights into a timely and crucial area of law and policy. It highlighted that, in order to make progress in advancing responsible AI, it is crucial to know and understand the current state of play, as well as to track progress over time. It also offered valuable insight into the work that went into bringing such a large-scale project to life.