By Stephen Losey,
Published by Military.com, 14 July 2021
Defense Secretary Lloyd Austin on Tuesday spelled out the Pentagon’s plan for using artificial intelligence, or AI, to deter or fight wars — but pledged that the military will fix or pull the plug on any system that gets out of line.
“We’re going to immediately adjust, improve or even disable AI systems that aren’t behaving the way that we intend,” Austin said in a speech at the National Security Commission on Artificial Intelligence’s summit on global emerging technology in Washington, D.C.
Austin said that the Pentagon now has more than 600 AI projects in the works, “significantly more than just a year ago.” The Defense Advanced Research Projects Agency, which has a long track record in such research, alone has more than 60 programs using AI, including systems that find and fix cybersecurity weaknesses.
“And we’re just getting started,” Austin said.
The Pentagon plans to invest almost $1.5 billion in its Joint Artificial Intelligence Center to speed up the military’s adoption of AI, he added.
Austin said that adopting AI will not change America’s adherence to the laws of war and democratic principles.
He referenced a set of principles for responsible AI, originally drafted by an outside advisory board, that the Pentagon adopted in 2020. Those guidelines provide general concepts about AI, but don’t include specific rules as to how it might be used in weapons systems. Senior Pentagon officials repeatedly have declined to agree to any kind of restrictions on AI decision-making even in areas like lethal action.
Officials have argued that any constraints might harm the U.S. military's ability to keep up with rivals who are rapidly advancing their own use of AI in weapons.
Austin added that the military will watch for unintended consequences or bias from AI systems and will intervene if those systems start behaving in ways they shouldn't.
“We’re not going to cut corners on safety, security or ethics,” he said. “Our watchwords are responsibility and results. And we don’t believe for a minute that we have to sacrifice one for the other.”
Congress last September released a bipartisan report urging the military to redouble its efforts on adopting AI and autonomous systems, calling for a “Manhattan Project”-type initiative to do so. That report noted that China has set a goal of becoming the leading AI force in the world by 2030.
But the military’s increasing interest in AI has also raised ethics concerns, especially when it comes to experiments with weapons systems that could decide on their own which humans to kill in combat. The United Nations in 2013 began considering whether such systems should be banned, a year after Human Rights Watch and other groups launched an effort called the Campaign to Stop Killer Robots.
Austin said that AI will be central to the military’s strategy of “integrated deterrence,” or using a combination of technology, strategy and capabilities to discourage potential adversaries from acting against the United States’ interests.
AI will help the military make faster and better decisions, operate jointly across different domains, and shed old, outdated ways of doing business, he said.
It’s not just the U.S. military working to adopt AI, Austin said. China is as well, and plans to use it for missions including surveillance, cyberattacks and autonomous weapons.
Critics have argued that the speed of AI decision-making will make it increasingly difficult for humans to monitor such systems and, if necessary, override their actions.
Austin also said the military must do a lot better at recruiting, training and holding on to talented people — particularly young people — who have the skills needed to develop AI. He said the Defense Department must create new career paths and incentives for them and include tech skills as part of its basic training programs.
The military also must radically change how it thinks about technology, he said. Some troops spend their day struggling with “virtually obsolete” laptops that are woefully behind what is available to the average consumer. And there are many college graduates and new Ph.D. recipients who would never consider working for the military, he added.