Workshop Explores How Artificial Intelligence Can Be Engineered for Safety and Control
Carnegie Mellon and White House Office of Science and Technology Policy Will Co-host
By Byron Spice
Artificial intelligence has the potential to benefit humankind in diverse and deep ways, but only to the extent that people believe these smart systems can be trusted. The technical means for ensuring AI systems operate in a safe, controlled manner will be the focus of a June 28 workshop at Carnegie Mellon University.
Co-hosted by the university and the White House Office of Science and Technology Policy (OSTP), "Safety and Control for Artificial Intelligence" will bring together 17 technical experts from across the country to address this issue, considered by some in the field to be more important to the future of AI than improvements in AI algorithms themselves.
"We have a window of opportunity to take on these issues of safety and control in a fundamental way, and to explore how we can integrate safety and control considerations into the emerging practice of design and engineering for AI-based systems," said William Scherlis, director of the Institute for Software Research in CMU's School of Computer Science and organizer of the workshop.
"Because we are at a formative stage in the development and deployment of AI systems, we have an opportunity to develop and promote engineering practice for AI-based systems that integrates safety and control. Of course, we want to do this without too much compromise to functionality and learning capability. In fact, we should aspire to enhance the productivity of AI engineers while also improving safety," Scherlis said.
The safety challenge is significant because of both the complexity of AI systems and the richness of their interactions with human users and operating environments.
How can we ensure the safety of an autonomous system that interacts in complex ways with human users and operators? A soon-to-be-familiar example is an autonomous vehicle driving through a highway construction zone on a rainy night that needs to get the attention of its human driver and hand over control. How can we ensure that systems remain safe as they learn, adapt and change their behavior through machine learning? And how can we ensure that AI systems will remain safe as they interact in complex ways with separately developed AI systems that are themselves also learning and adapting?
This is the third of four public workshops the OSTP is co-hosting this year with various organizations to spur public dialogue on artificial intelligence and machine learning, and to identify challenges and opportunities related to AI.
The CMU/OSTP workshop will feature experts in AI algorithms, safety and systems engineering, and mathematical modeling. Presenters come from government, industry and academic institutions, including the Defense Advanced Research Projects Agency (DARPA); the National Science Foundation; the National Security Council; the Intelligence Advanced Research Projects Agency; Google; Microsoft; Uber; ZF TRW; Vanderbilt University; Tufts University; Oregon State University; the University of California, Berkeley; and Carnegie Mellon.
Speakers will include John Launchbury, director of DARPA's Information Innovation Office; Jeannette Wing, corporate vice president of Microsoft Research; Manuela Veloso, the new head of CMU's pioneering Machine Learning Department; and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.
CMU will issue a report on the workshop late this summer. Additionally, dialogue from this and the other three AI workshops is expected to feed into a public report on AI that OSTP will issue later this year.
For further information and to register for the workshop, visit the event website.