Responding to all-too-frequent reports of shootings in the U.S., technology firms are beginning to roll out systems designed to quickly detect firearms in public settings.
Many of the systems offer an advantage over conventional metal detectors: they often include video surveillance that can spot a gun in plain sight in a crowd, something a metal detector cannot do.
Plus, some of the systems can use facial recognition to identify known persons of interest, such as registered sex offenders and gang members, a form of surveillance currently beyond the reach of conventional metal detectors.
The Canadian firm Patriot One Technologies, for example, offers a solution featuring a microwave radar scanner driven by artificial intelligence (AI) that can detect hidden weapons, along with a video surveillance component. "Our belief and strategy is to create a layered, multi-sensor approach to threat detection," says the company's CEO, Martin Cronin.
The Patriot One system works by beaming radio waves at individuals; the waves bounce off any guns concealed beneath their clothing or stashed in backpacks or other luggage. The system processes the radio waves that bounce back from an individual with AI software trained to recognize what a radio wave bouncing off a hidden gun looks like. The system also incorporates an AI video surveillance component, which has been trained to identify guns in plain sight of video cameras.
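To make that pipeline concrete, here is a minimal Python sketch of how a radar-return classifier of this general kind might be structured. Patriot One has not published its design, so the features, threshold, and function names below are all illustrative assumptions, not the company's actual method.

```python
# Hypothetical sketch: deciding whether a radar return resembles a
# concealed metal object. All features and thresholds are illustrative.
import numpy as np

def extract_features(radar_return: np.ndarray) -> np.ndarray:
    """Summarize a raw radar return as a small feature vector.

    Metal objects tend to produce strong, sharply peaked reflections,
    so peak strength and spectral spikiness are plausible toy features.
    """
    spectrum = np.abs(np.fft.rfft(radar_return))
    return np.array([spectrum.max(),    # peak reflection strength
                     spectrum.mean(),   # overall reflected energy
                     spectrum.std()])   # how "spiky" the spectrum is

def classify_return(radar_return: np.ndarray, threshold: float = 5.0) -> bool:
    """Flag a return whose peak-to-average reflection ratio is suspicious.

    A real system would use a model trained on labeled returns; this
    hand-set threshold only illustrates the shape of the decision step.
    """
    peak, mean, _ = extract_features(radar_return)
    return bool(peak / (mean + 1e-9) > threshold)

# Toy usage: a noisy return containing one strong simulated echo.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, 1024)
signal[300:310] += 2.0  # simulated reflection from a metal object
print("weapon-like return:", classify_return(signal))
```

In practice, the hand-set threshold would be replaced by a model trained on labeled radar returns, which is also what allows such software to improve as it processes more examples.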
When the Patriot One system detects a gun, it instantly alerts security and other personnel via email or text message.
Like many AI applications, the system is not perfect and can generate false positives; in this case, a false report that a specific individual is harboring a hidden gun. Yet the AI software is designed to learn from its mistakes, recognizing threats (and non-threats) more accurately the more it is used.
"We are now in the process of adding knives and bomb pictures/images to the database," Cronin says. The solution, he adds, "is only looking for threat items or activities—guns, rifles, knives, and bombs, or disturbances—and not conducting any facial recognition."
Not surprisingly, a number of vendors are seeking to provide next-generation gun detection capabilities to schools and other organizations.
Bellevue, WA-based Virtual eForce, for example, makes an AI gun detection system similar to the one offered by Patriot One, although its solution detects only guns in plain sight, not hidden weapons. Essentially, Virtual eForce's solution uses AI software to study people in a crowd through conventional video cameras. When the AI detects a gun in plain sight, the system sends an alert via text, email, or a mobile app to security personnel; it also can be programmed to lock all the perimeter doors in a building or buildings equipped with electronic access control systems.
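As a rough illustration of the detect-alert-lockdown flow just described, the sketch below wires a detection event to notification channels and an access-control interface. Virtual eForce has not published an API, so every class, function, and parameter name here is hypothetical.

```python
# Hypothetical sketch of a detect -> alert -> lockdown pipeline.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    label: str          # e.g., "handgun", "rifle"
    confidence: float   # model confidence in [0, 1]

def handle_detection(det, alert_channels, door_controller,
                     lockdown_threshold=0.9):
    """Notify security staff; lock perimeter doors on high-confidence hits."""
    message = (f"ALERT: possible {det.label} on camera {det.camera_id} "
               f"(confidence {det.confidence:.0%})")
    for channel in alert_channels:        # e.g., SMS, email, mobile push
        channel.send(message)
    if det.confidence >= lockdown_threshold:
        door_controller.lock_perimeter()  # electronic access control hook

# Stand-in implementations so the sketch runs end to end.
class PrintChannel:
    def send(self, msg): print("notify:", msg)

class DoorController:
    def lock_perimeter(self): print("perimeter doors locked")

handle_detection(Detection("lobby-2", "handgun", 0.95),
                 [PrintChannel()], DoorController())
```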
Tel Aviv, Israel-based AnyVision also offers an AI video surveillance system trained to identify a gun in plain sight in a crowd. However, the AnyVision solution also provides facial, body shape, and appearance recognition, based on a database of pictures of students' faces, body shapes, and the clothing each student typically wears. When the system detects an anomaly, such as a student who suddenly appears at school one day wearing military clothing and carrying an oversized bag, it can send an alert to school security.
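One plausible way to implement that kind of anomaly check, assuming the appearance-embedding vectors common in modern computer vision, is to measure how far today's appearance sits from a student's historical baseline. AnyVision has not disclosed its method; this cosine-distance sketch is purely illustrative.

```python
# Hypothetical sketch: scoring how unusual today's appearance is,
# relative to a student's typical appearance embeddings.
import numpy as np

def anomaly_score(today: np.ndarray, history: np.ndarray) -> float:
    """Cosine distance from today's embedding to the historical mean;
    0.0 means 'looks typical', values near 1.0 suggest review."""
    baseline = history.mean(axis=0)
    cosine = np.dot(today, baseline) / (
        np.linalg.norm(today) * np.linalg.norm(baseline) + 1e-9)
    return 1.0 - cosine

# Toy usage: today's (invented) embedding differs sharply from baseline.
history = np.array([[1.0, 0.0], [0.9, 0.1]])
today = np.array([0.0, 1.0])
print(round(anomaly_score(today, history), 2))  # near 1.0 -> alert
```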
Another Canadian company, SN Technologies of Gananoque, Ontario, also markets AI video software trained to recognize guns in plain sight. This solution, however, can also scan school crowds using its facial recognition software to pick out registered sex offenders, suspended students, fired employees, gang members, or other unwanted people, based on a photographic database of those individuals.
Of these solutions, Patriot One appears to thread the needle in its attempt to protect the public from gun violence without stomping on personal privacy, according to industry watcher Yafit Lev-Aretz, an assistant professor of law at the City University of New York.
"The Patriot One system analyzes and 'profiles' weapons, and not people," Lev-Aretz says. "In that sense, the privacy violation seems negligible, especially when compared with other screening devices that are in use today."
The emergence of AI-driven video surveillance systems that unblinkingly watch—and, more importantly, analyze—physical spaces 24/7/365 is a reality for which current-day law is not prepared.
"Businesses are free to use video surveillance in their stores and in public, in part because lawmakers assumed that watching video was too laborious and that most video would simply be erased because of storage concerns," says Chris Jay Hoofnagle, a professor at the University of California, Berkeley and faculty director of its Berkeley Center for Law & Technology.
However, "Machine learning and computer vision are making it possible to understand video that otherwise would never be 'seen.'" Hoofnagle adds. "The law does not anticipate these forms of post-collection analysis."
Moreover, systems like Patriot One might run into privacy issues when used in a government-controlled setting, according to Eric Goldman, a professor at Santa Clara University School of Law and co-director of its High Tech Law Institute. "When deployed by the government, it could constitute a 'search,' and the Constitution or statutes may prohibit such automated searches without any suspicion," Goldman says.
Some industry watchers are concerned about the false positives such AI-driven surveillance systems can generate, along with the tragic consequences that can follow.
"For example, there have been numerous tragedies due to 'swatting,' when law enforcement receives a false tip of an active shooter and responds by storming the house with guns drawn, sometimes destroying houses and shooting innocent people," says Santa Clara's Goldman.
Fortunately, many technology companies are not tone-deaf to the privacy, safety, and other concerns associated with their AI. Patriot One's Cronin, for one, says he has met with the American Civil Liberties Union in the U.S. to discuss the technology underlying the company's solution. "Our desire is to maintain a person's privacy until they pose a threat, and we are keen to ensure maximum public acceptance for our approach."
Plus, there are ways to build in system safeguards that could go a long way toward mitigating concerns about privacy, false positives, and other shortcomings, according to industry watchers.
For example, AI video systems could be designed to obscure the faces of innocent people being surveilled to ensure security personnel don't resort to profiling, says Berkeley's Hoofnagle. "More generally, the privacy impact is reduced if the system discards all data after some short period, so that long-term profiles are not generated about individuals."
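Both safeguards Hoofnagle describes can be prototyped with off-the-shelf tools. The sketch below uses OpenCV's stock face detector to blur faces in a frame, and a simple timestamp check to discard footage after a retention window; the 15-minute window and all function names are assumptions for illustration, not any vendor's actual practice.

```python
# Hypothetical sketch: blur bystander faces, discard stale footage.
import time
import cv2

# OpenCV's bundled Haar cascade face detector.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def redact_faces(frame):
    """Blur every detected face so reviewers see threats, not identities."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(
            frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

RETENTION_SECONDS = 15 * 60  # assumed 15-minute retention window

def purge_expired(frames):
    """Keep only (timestamp, frame) pairs newer than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    return [(t, f) for t, f in frames if t >= cutoff]
```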
Perhaps the best way to 'sell' the public on the emergence of AI-driven video surveillance would be to pull back the veil of mystery and give everyone a clear view of what goes on inside the 'black box' of these AIs.
Says Sascha Meinrath, president of the grassroots advocacy organization Defending Rights and Dissent Foundation, "Public disclosure of AI's false positives and false negatives, and a careful independent review of the operation of these technologies, is the only way we can honestly assess that human biases, and occasional illegal discrimination, isn't being obfuscated within so-called AI. Only with full public disclosure of error rates can we determine the true efficacy of these systems."
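The rates Meinrath wants disclosed are straightforward to compute once an independent auditor has ground-truth labels for a system's alerts. A minimal sketch, with invented audit figures purely for illustration:

```python
# False-positive rate = FP / (FP + TN); false-negative rate = FN / (FN + TP).
def error_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the two disclosure metrics from a confusion matrix."""
    return {
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

# Invented audit: 40 true alerts, 12 false alarms, 9,900 correctly
# ignored passersby, 3 missed weapons.
print(error_rates(tp=40, fp=12, tn=9900, fn=3))
```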
Joe Dysart is an Internet speaker and business consultant based in Manhattan, NY, USA.