The company’s chief executive officer, Sundar Pichai, announced the decision in an online post. He wrote that the new policy was one of several newly launched “principles” aimed at guiding the company’s AI work in the future.
The principles are a set of ethical guidelines covering the company’s development and sale of AI technology and tools.
Google says it will no longer design or launch AI for weapons or other technologies whose main purpose is to cause harm to people. It will also not permit its AI technology to be used for surveillance activities that violate “internationally accepted norms.”
“We believe these principles are the right foundation for our company and the future development of AI,” Pichai wrote.
The principles were announced after more than 4,000 Google employees signed a document calling for the company to cancel an AI agreement with the U.S. Department of Defense. That agreement, known as Project Maven, involves the use of Google’s AI technology to examine drone images for the U.S. military.
A Google official recently told employees Project Maven would not be extended after it ends next year. Google is expected to discuss with military officials how to complete the project without violating its new principles.
Kirk Hanson is director of the Markkula Center for Applied Ethics at Santa Clara University in California. The center examines how ethics can be used to guide technology development.
He told VOA the opposition by Google employees to the U.S. military agreement was based on fears that AI technology could lead to the creation of “autonomous weapons.”
“If you have artificial intelligence which identifies targets and automatically launches weapons, you have what is known as an autonomous weapon -- there is no human decision to launch the weapon.”
Hanson said other companies could also face pressure from employees or the public if their AI technology is used to develop autonomous weapons. Just as with driverless vehicles, autonomous weapon systems may not be as safe as their supporters promise.
“We should be more concerned about how an autonomous weapon might make a mistake. Is that artificial intelligence targeting system as good as we think it is? And until we have trust that those systems will not make mistakes, we're going to have a lot of doubts about the use of artificial intelligence.”
Hanson says even though Project Maven does not directly use Google AI to power autonomous weapons, AI systems do help with military targeting.
“If you have better targeting, presumably that's a good thing. But the critics say if you have better targeting, it raises your level of confidence in the targeting, which may lead you to then apply independent autonomous decision making by machine, which will launch the weapons.”
A top Department of Defense official was asked about the use of autonomous weapons during an event last year at the Center for Strategic and International Studies in Washington. Air Force Gen. Paul J. Selva, vice chairman of the Joint Chiefs of Staff, said such systems should never be used to replace human commanders.
Google chief Pichai said the company does not plan to stop providing AI technology for all military uses. He said Google will still seek government projects in areas such as military training, internet security and search and rescue.