It is hard to find a cluster of technologies that might change our societies more in the coming decades than artificial intelligence. These technologies can be a force for good (see, for example, the AI for Good Global Summit), but they also carry many risks, ranging from algorithmic bias to laying the foundations for surveillance states and supporting automated warfare.
To grapple with some of these opportunities and risks, more than 20 countries have formulated AI policies in the past two years. Interestingly enough, Switzerland is one of the few western countries that have not. In addition, several international fora (including the UN, EU, G7, and OECD) are drafting or have published “principles and guidelines” for an ethical trajectory of a future with AI.
Studying these documents, we see some overlap of shared principles. A curious case in point is the principle of “inclusion”. It is important because AI has the potential to heighten inequities on many levels: between corporations, governments and citizens, between people with access to technology and those without, between richer and poorer countries, and so on. Inclusion – be it in research, development, deployment or governance – should mitigate some of these effects.
For example, the newly published Ethics guidelines for trustworthy AI by the European Commission state that “we must enable inclusion and diversity throughout the entire AI system’s life cycle” and mention as stakeholders all “those involved in making the products, the users, and other impacted groups”, including “society at large”. The “Charlevoix common vision for the future of AI” issued by G7 leaders speaks of “involving women, underrepresented populations and marginalized individuals as creators, stakeholders, leaders and decision-makers at all stages of the development and implementation of AI applications”. The OECD will publish its principles and guidelines in spring 2019, with chapters on “inclusive growth” and “fairness”.
We also see this principle in documents from civil society: the Asilomar Principles, the Toronto Declaration, and the Montreal Declaration, for example, put a heavy focus on inclusiveness. Even in industry documents we find the principle, albeit somewhat softened: Microsoft makes it a central notion of its AI principles, while Baidu emphasizes “equal access” and Google simply commits to “working with different stakeholders”. The eight tenets of the Partnership on AI (a platform for over 80 industry leaders and non-profits in the AI sector) include “actively engaging stakeholders” and “striving to understand and respect the interests of all parties that may be impacted by AI advances”.
Is this enough for an inclusive future of artificial intelligence? Probably not, because despite the prevalence of the principle in all these documents, there is very little clarity on what inclusion actually means, how it should be operationalized, and who should be included when, where and by whom. Given the global scale and fast pace of technological development, these questions may determine the trajectory of humanity. Without clear operational guidelines and enforcement mechanisms, all these principles will remain lofty visions without practical significance.
There is thus an urgent need for concrete ideas on how to operationalize the principle of inclusion in practice. foraus, swissnex and AI Commons have initiated a global ideation campaign on this topic: using the policy innovation platform Policy Kitchen and running workshops in eleven countries on four continents with people of diverse backgrounds, we are collecting concrete ideas for initiatives, models and policies to realize this principle in practice. The best ideas will be presented at the AI for Good Global Summit and pushed to decision makers in governments, international organizations, industry and civil society. You can contribute your ideas here: policykitchen.com/inclusiveai.