In 2018, when Google employees found out about their company's involvement in Project Maven, a controversial US military effort to develop AI to analyze surveillance video, they weren't happy. Thousands protested. "We believe that Google should not be in the business of war," they wrote in a letter to the company's leadership. Around a dozen employees resigned. Google did not renew the contract in 2019.
Project Maven still exists, and other tech companies, including Amazon and Microsoft, have since taken Google's place. Yet the US Department of Defense knows it has a trust problem. That's something it must tackle to maintain access to the latest technology, especially AI—which will require partnering with Big Tech and other nonmilitary organizations.
In a bid to promote transparency, the Defense Innovation Unit, which awards DoD contracts to companies, has released what it calls "responsible artificial intelligence" guidelines that third-party developers will be required to follow when building AI for the military, whether the AI is destined for an HR system or for target recognition.
From MIT Technology Review