Wallops Island—a remote, marshy spit of land along the eastern shore of Virginia, near a famed national refuge for wild horses—is mostly known as a launch site for government and private rockets. But it also makes for a perfect, quiet spot to test a revolutionary weapons technology.
If a fishing vessel had steamed past the area last October, the crew might have glimpsed half a dozen or so 35-foot-long inflatable boats darting through the shallows, and thought little of it. But if crew members had looked closer, they would have seen that no one was aboard: The engine throttle levers were shifting up and down as if controlled by ghosts. The boats were using high-tech gear to sense their surroundings, communicate with one another, and automatically position themselves so that, in theory, .50-caliber machine guns that can be strapped to their bows could fire a steady stream of bullets to protect troops landing on a beach.
The secretive effort—part of a Marine Corps program called Sea Mob—was meant to demonstrate that vessels equipped with cutting-edge technology could soon undertake lethal assaults without a direct human hand at the helm. It was successful: Sources familiar with the test described it as a major milestone in the development of a new wave of artificially intelligent weapons systems soon to make their way to the battlefield.
Lethal, largely autonomous weaponry isn't entirely new: A handful of such systems have been deployed for decades, though only in limited, defensive roles, such as shooting down missiles hurtling toward ships. But with the development of AI-infused systems, the military is now on the verge of fielding machines capable of going on the offensive, picking out targets and taking lethal action without direct human input.
So far, U.S. military officials haven't given machines full control, and they say there are no firm plans to do so. Many officers—schooled for years in the importance of controlling the battlefield—remain deeply skeptical about handing such authority to a robot. Critics, both inside and outside the military, worry about not being able to predict or understand decisions made by artificially intelligent machines, about computer instructions that are badly written or hacked, and about machines somehow straying outside the parameters created by their inventors. Some also argue that allowing weapons to decide to kill violates the ethical and legal norms that have governed the use of force on the battlefield since the horrors of World War II.