Robot Morals and Human Ethics
The community of AI researchers already draws heavily upon work in cognitive science, evolutionary biology, and neuroscience. However, ensuring the safety of sophisticated forms of artificial intelligence will require more than the development of new algorithms. AI safety will also require input from, and collaboration among, scholars and engineers in a wide variety of other fields, including (but not limited to): machine morality/machine ethics, social robotics, the management of complex systems and resilience engineering, cost/benefit analysis, risk analysis, testing and verification, robot law, and oversight and governance. Yet scholars and scientists in AI and these additional fields commonly work within their own intellectual silos. They seldom know much about work progressing in complementary fields, thus losing opportunities for collaboration and often duplicating the work of others.
This presentation will outline that project and underscore policy recommendations for ensuring that developments in AI/robotics do not slip beyond our control. Some of these policy recommendations have been more fully described in Wendell Wallach's recent publications. They include:
* Direct 10% of funding for research in AI/robotics toward studying, managing, and adapting to the societal impact of intelligent machines.
* Create an oversight and governance coordinating committee for AI/robotics. The committee should be mandated to favor soft governance solutions (industry standards, professional codes of conduct, etc.) over laws and regulatory agencies in forging adaptive solutions to recognized risks and dangers.
* Call for a Presidential Order declaring that, in the view of the U.S., lethal autonomous weapons systems (AWS) violate existing international humanitarian law (IHL).