A Silicon Valley start-up recently unveiled a drone that can set a course entirely on its own. A handy smartphone app allows the user to tell the airborne drone to follow someone. Once the drone starts tracking, its subject will find it remarkably hard to shake.
The drone is meant to be a fun gadget — sort of a flying selfie stick. But it is not unreasonable to find this automated bloodhound a little unnerving.
On Tuesday, a group of artificial intelligence researchers and policymakers from prominent labs and think tanks in the United States and Britain released a report describing how rapidly evolving and increasingly affordable A.I. technologies could be used for malicious purposes. They proposed preventive measures, including caution in how research is shared: don't release it widely until its risks are well understood.
A.I. experts and pundits have discussed the threats created by the technology for years, but this is among the first efforts to tackle the issue head-on. And the little tracking drone helps explain what they are worried about.
From The New York Times