Why We Invested: ACM Conference on Fairness, Accountability and Transparency
Algorithmic systems have come to pervade many parts of our lives, increasingly used to make life-changing decisions in areas such as healthcare, criminal justice, and employment, to name but a few. There is, at the same time, growing recognition that these systems run the risk of entrenching and amplifying biases and increasing power asymmetries between those who use them to make decisions and those subject to these decisions. The past few years have offered a steady stream of examples of these systems causing harm, often disproportionately affecting already marginalised communities.
The remarkable growth in public attention to the risks of algorithmic systems has led to more pointed debate, increased regulatory scrutiny, and a growing number of civil society and industry initiatives. The social challenges related to the development and use of algorithmic systems today bring into sharp focus the urgent need for interdisciplinary approaches. It is important that disparate and often siloed disciplines and communities, including researchers, practitioners, policymakers, and advocates, come together to develop the insights and tools necessary to ensure that these technologies follow appropriate safeguards, standards, and design practices.
That is why we are supporting the ACM Conference on Fairness, Accountability and Transparency with a core grant of $500,000 over three years. The conference provides a platform for peer-reviewed research exploring the impacts of algorithmic systems and aims to bridge the gap between technical fields and disciplines in the social sciences, humanities, and law and policy, to ensure emerging research, insights, and solutions meaningfully address the social nature and implications of algorithmic technologies. Its main objective is to foster and highlight academic research that advances our understanding of the potentially discriminatory impact of algorithmic systems, the challenges of holding these systems and their developers and operators to account, and the increased information asymmetries that often accompany their adoption.
The first event following Luminate’s funding was held in Atlanta in January this year and welcomed over 500 participants. It opened with the inaugural Doctoral Consortium to support and promote the next generation of scholars. This was followed by a series of tutorials designed to translate issues between different disciplines (for example, AI for international development); deep dives on the implications of the use of algorithmic systems in real life (such as algorithmic risk assessments); and hands-on explorations of emerging techniques (like a toolkit for AI fairness). The following two days kicked off with keynote addresses from Jon Kleinberg and Deirdre Mulligan, and were packed with presentations of selected research papers covering a wide range of topics and perspectives. Papers presented included ones that explored the cultural and social contexts in which different fairness definitions have emerged, presented a framework for greater transparency about the performance characteristics of machine learning models, exposed racial bias in an algorithm that guides health decisions for over 70 million people in the US, and analysed China’s social credit system. The final day also included an engaging and participatory Town Hall session that brought attendees into a conversation about the future of the event, surfacing valuable perspectives and suggestions on ways to strengthen the conference and create a truly inclusive space for engagement and collaboration in future years.
Interest in the conference and broader field continues to grow, with an increased number of research papers submitted for consideration this year and a large proportion of first-time attendees. We look forward to seeing how the conference, and the community around it, continues to develop over the coming years. As the conference looks ahead to Barcelona in 2020, opportunities abound to bring in a more diverse set of participants and perspectives and to cultivate an engaging forum for developing deeper understandings of the sociotechnical nature of algorithmic systems.
You can watch the 2019 conference proceedings and access all papers here.