Robot Morals and Human Ethics

Sunday, February 14, 2016: 8:00 AM-9:30 AM
Wilson A (Marriott Wardman Park)
Wendell Wallach, Yale University, New Haven, CT
Throughout 2015, headlines have emphasized risks and dangers posed by the development of artificial intelligence and robotics, including: the need for an international ban on lethal autonomous weapons, the introduction of drones into public airspace, the downward pressure increasingly intelligent systems will exert on wages and jobs (technological unemployment), life-and-death situations driverless cars will encounter, and the future possibility of superintelligence (a technological singularity). In particular, recent breakthroughs in AI using a technique called “deep learning” have renewed expressions of concern about the eventual advent of superintelligence. While the press has blown warnings from Stephen Hawking, Elon Musk, Stuart Russell, and others out of proportion, these warnings have helped generate interest in a new research trajectory directed at the development of truly beneficial, controllable, and robust AI. That new trajectory is often referred to by AI researchers as “values alignment” and incorporates earlier work done by applied philosophers and computer scientists in an emerging field called “machine morality” or “machine ethics.”

The community of AI researchers already draws heavily upon work in cognitive science, evolutionary biology, and neuroscience. However, ensuring the safety of sophisticated forms of artificial intelligence will require more than the development of new algorithms. AI safety will also require input and collaboration from scholars and engineers in a wide variety of other fields, including (but not limited to): machine morality/machine ethics, social robotics, the management of complex systems and resilience engineering, cost/benefit analysis, risk analysis, testing and verification, robot law, and oversight and governance. Yet scholars and scientists in AI and these additional fields commonly work within their own intellectual silos. They seldom know much about work progressing in complementary fields, thus losing opportunities for collaboration and often reproducing the work of others.

This presentation will outline that research trajectory and underscore policy recommendations for ensuring that developments in AI/robotics do not slip beyond our control. Some of these policy recommendations have been more fully described in Wendell Wallach's recent publications. They include:

* Direct 10% of funding for research in AI/robotics toward studying, managing, and adapting to the societal impact of intelligent machines.

* Create an oversight and governance coordinating committee for AI/robotics. The committee should be mandated to favor soft governance solutions (industry standards, professional codes of conduct, etc.) over laws and regulatory agencies in forging adaptive solutions to recognized risks and dangers. 

* Call for a Presidential Order declaring that, in the view of the U.S., lethal autonomous weapons systems (AWS) violate existing international humanitarian law (IHL).