Keynote
in
Workshop: AI For Social Good (AISG)
Creating constructive change and avoiding unintended consequences from machine learning
This talk will give an overview of some of the known failure modes that lead to unintended consequences in AI development, as well as research agendas and initiatives to mitigate them, including a number that are underway at the Partnership on AI (PAI). Important case studies include the use of algorithmic risk assessment tools in the US criminal justice system, and the side effects caused by using deliberate or unintended optimization processes to design high-stakes technical and bureaucratic systems. These are important in their own right, but they are also important contributors to conversations about social good applications of AI, which are themselves subject to significant potential for unintended consequences.
Speaker bio: Peter Eckersley is Director of Research at the Partnership on AI, a collaboration between the major technology companies, civil society and academia to ensure that AI is designed and used to benefit humanity. He leads PAI's research on machine learning policy and ethics, including projects within PAI itself and projects in collaboration with the Partnership's extensive membership. Peter's AI research interests are broad, including measuring progress in the field, figuring out how to translate ethical and safety concerns into mathematical constraints, finding the right metaphors and ways of thinking about AI development, and setting sound policies around high-stakes applications such as self-driving vehicles, recidivism prediction, cybersecurity, and military applications of AI. Prior to joining PAI, Peter was Chief Computer Scientist for the Electronic Frontier Foundation. At EFF he led a team of technologists that launched numerous computer security and privacy projects including Let's Encrypt and Certbot, Panopticlick, HTTPS Everywhere, the SSL Observatory and Privacy Badger; they also worked on diverse Internet policy issues including campaigning to preserve open wireless networks; fighting to keep modern computing platforms open; helping to start the campaign against the SOPA/PIPA Internet blacklist legislation; and running the first controlled tests to confirm that Comcast was using forged reset packets to interfere with P2P protocols. Peter holds a PhD in computer science and law from the University of Melbourne; his research focused on the practicality and desirability of using alternative compensation systems to legalize P2P file sharing and similar distribution tools while still paying authors and artists for their work.
He currently serves on the board of the Internet Security Research Group and the Advisory Council of the Open Technology Fund; he is an Affiliate of the Center for International Security and Cooperation at Stanford University and a Distinguished Technology Fellow at EFF.