Beyond the Black Box: Toward Transparent and Understandable Machine Learning

Monday, February 20, 2017: 9:00 AM-10:30 AM
Room 304 (Hynes Convention Center)
Machine learning is an enabling technology that is beginning to “leave the lab” and underpin new applications in diverse areas such as medicine, public services, finance, and even personal assistants. Yet some of the most advanced methods, including deep learning, raise interpretability and transparency concerns: the algorithms cannot explain the results they produce, either to the scientists who create them or to end users, creating a “black box.” This opacity could delay the widespread uptake of machine learning in society if public acceptance rests on understanding these new technologies. Understandable machine intelligence is thus scientifically interesting and increasingly important. It may require machines to have an understanding of causality, resulting in entirely new models of the world, raising new philosophical questions about what it means to be human, and shaping how human-machine collaboration develops in the future. These issues require careful consideration today. Can algorithms be both unexplainable and accountable? When causal explanation is not possible, will other forms of explanation satisfy the public? This session will explore cutting-edge research on the black box of machine learning and its implications for science policy and society.
Organizer:
Natasha McCarthy, The Royal Society, London
Moderator:
Peter Donnelly, University of Oxford
Speakers:
Anders Sandberg, Future of Humanity Institute
Machine Learning and the Human Race