Computer scientists in New York have developed an ingenious proof-of-concept cyberattack in which a $3 magnetic sensor reveals the detailed architecture of deep learning neural networks, allowing the designs of valuable artificial intelligence (AI) systems to be pirated.
The attack, which poses a major risk to intellectual property in the burgeoning AI market, was developed by Ph.D. student Henrique Maia and colleagues in the Computer Graphics Group at Columbia University in New York City, along with Dingzeyu Li at Adobe's Seattle research lab and Eitan Grinspun at Canada's University of Toronto.
In August, at the 2022 USENIX Security Symposium in Boston, MA, the team will reveal how a 4mm-square magnetic flux sensor was applied to the power cable of multicore graphics processing units (GPUs) running a variety of deep neural networks. The researchers found that the timing and magnitude of the electrical signal induced in the sensor by the magnetic flux of the current coursing through the power cable "betrays the detailed topology," or architecture, of a deep neural network's nodes, layers, and neuronal weights.
"We found that one sensor placed after the power supply, but before one or many GPUs, is enough to betray the signal and subsequently architecture of networks," Maia says. "Even in the case of a GPU stack, where the power cord forks across multiple cards running the same operations in a distributed fashion, a single sensor, either before or after the power cable splits, sufficed to produce meaningful traces."
The team began its research into the feasibility of this magnetic "side-channel" after noting that deep learning systems, used in applications such as speech recognition, image identification, robotics, and drug discovery, are trained and "tuned meticulously" at great expense. Given the commercial value of deep learning systems, the researchers wondered what kinds of attacks AI developers should plan to defend against if they want to protect their investments in deep neural networks (DNNs).
DNNs take as their input an array, or matrix, of data—perhaps representing an image or a spoken word—and output an 'inference': a strong, informed guess as to what that input is. The DNN can do this because it has been trained with very large quantities of known data. The training process alters the strengths of neuronal connections, or weights, between a great many nodes on multiple operational layers within the network, each of which can have different logical functions chosen to provide the best performance for the task at hand.
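To make that concrete, the short sketch below builds a toy network in PyTorch. It is purely illustrative, not one of the networks the researchers studied, but it shows the ingredients at stake: a sequence of layers of different functional types, the learned weights inside them, and a forward pass that turns an input array into an inference.

```python
# A toy deep network (illustrative only, not one studied in the paper):
# a sequence of layers of different functional types, each with learned
# weights, mapping an input array to an "inference".
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer: learned filters
    nn.ReLU(),                                   # nonlinear activation layer
    nn.MaxPool2d(2),                             # downsampling layer
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),                 # fully connected layer: weight matrix
)

x = torch.randn(1, 1, 28, 28)       # the input array, e.g. a 28x28 grayscale image
logits = model(x)                   # data flows through the layers in sequence
inference = logits.argmax(dim=1)    # the network's informed guess about the input
```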
An attacker wanting to copy a company's commercially successful DNN needs to discover the "shape" of the network: the number of layers used, the sequence in which the functional layers appear, and each layer's functional type. The attacker also needs to discover a host of factors, called hyperparameters, that govern how each layer operates and how the layers interoperate.
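In other words, the attacker's prize is a blueprint. A hypothetical example of what such a recovered description might look like (our illustration, not an artifact from the paper):

```python
# A hypothetical reconstruction target: the layer sequence and the
# per-layer hyperparameters needed to clone a network's architecture.
stolen_architecture = [
    {"layer": "conv2d",  "out_channels": 64,  "kernel_size": 7, "stride": 2},
    {"layer": "maxpool", "kernel_size": 3,    "stride": 2},
    {"layer": "conv2d",  "out_channels": 128, "kernel_size": 3, "stride": 1},
    {"layer": "relu"},
    {"layer": "linear",  "out_features": 1000},
]
```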
The researchers speculated that an attacker's best route to detecting all of these facets probably lies in the magnetic sensing domain. The reason: deep neural networks operate in a modular, sequential way, layer by layer, so the timing of the magnetic signals induced by current pulses in the GPU power cable, as transistors flip in its many cores, should be very clear and distinct compared to those of a conventional multicore CPU, where processes occur concurrently.
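A simplified sketch of that intuition, under our own assumptions rather than the team's actual signal-processing pipeline: if each layer produces a distinct burst of current draw, a recorded trace can be cut at the quiet gaps between bursts, and each burst's duration and magnitude become clues to the layer that produced it.

```python
import numpy as np

def segment_layers(trace, threshold, min_gap=50):
    """Split a 1-D power/flux trace into bursts separated by >= min_gap quiet samples."""
    segments, start, quiet = [], None, 0
    for i, sample in enumerate(trace):
        if sample > threshold:
            if start is None:
                start = i              # a new burst (candidate layer) begins
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_gap:       # a long lull: treat it as a layer boundary
                segments.append((start, i - quiet + 1))
                start, quiet = None, 0
    if start is not None:
        segments.append((start, len(trace)))
    return segments

# Toy trace: three "layers" of different lengths separated by quiet gaps.
trace = np.concatenate([np.ones(200), np.zeros(100),
                        np.ones(500), np.zeros(100),
                        np.ones(80)])
print(segment_layers(trace, threshold=0.5))   # -> [(0, 200), (300, 800), (900, 980)]
```

The researchers' actual analysis goes much further, mapping the timing and magnitude of such bursts to specific layer types and hyperparameters.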
To test the idea, they placed a single-axis magnetic flux sensor (a Texas Instruments DRV425) against the power cables of four different types of GPUs running DNNs sometimes thousands of layers deep.
It worked. "Our prototype shows it is possible to extract both the high-level network topology and detailed hyperparameters," the team reported.
Says Maia, "The affordability of the sensor, together with an attacker's ability to capture signal traces and study and process them offline, mean even the smallest window of opportunity for snooping will pose a threat to the intellectual property and privacy of neural architectures."
Were the attack carried out in the wild, Maia imagines it being executed remotely, with the sensor integrated on a tiny Arduino-type circuit board "with either Wi-Fi/radio/bluetooth capability to transmit signals for collecting and processing elsewhere."
This threat to AI systems is being taken seriously, and two leading GPU makers have been warned about it, says Maia. "We have reached out to both Nvidia and AMD to alert them of our findings. Both acknowledged our communication and, in one case, my team and I entered into a lengthy Zoom discussion with their Product Security and Incident Response Teams, who were very curious about our findings."
Almost as ingenious as the attack are the defenses the researchers have devised for AI firms to adopt.
Maia explains, "A couple of different countermeasures can be taken: prevention and jamming.
"In prevention, we intersperse a few unnecessary operations throughout the network: dead ends that take in data, but produce output that is never used. This obfuscates the true logical path of data in the network, making the final logical sequence of layers hard to discern, but at the cost of taking longer to run the network.
"In jamming, we run other background processes on the GPU so as to add noise to the signal. We ran an increasingly large background operation concurrently with neural network inferences to show the effects of jamming. However, we found that in our experiments, the networks tested could still be recovered reliably, with around 80% accuracy, unless the background process took up 45% or higher of the GPUs resources, which might not always be an option."
Such attacks could be difficult to execute, says Carsten Maple, a professor of cyber systems at the University of Warwick in the U.K., and a Fellow of the Alan Turing Institute in London. He explains, "The attacker needs to be within close physical proximity to the hardware, and this limits the risk the attack poses. The most likely realization of the attack would be from an insider who was able to access equipment, or someone somewhere in the supply chain who could insert the sensor.
"However, none of this means the work is not of interest and significance. The attack demonstrates well how measurements can reveal the structure and parameters of a deep neural network. Advances such as miniaturization of sensors and communications technology mean that supply chain attacks may become easier to apply without discovery, and advances in remote measurement, such as antennae that have advanced amplification capability, can remove the need for such close physical access."
In Poland, meanwhile, Wojciech Mazurczyk, a professor in the Institute of Computer Science, Division of Software Engineering and Computer Architecture, in the Faculty of Electronics and Information Technology of the Warsaw University of Technology (WUT), where he heads the Computer Systems Security Group (CSSG), agrees with the Columbia authors that their novel neural network assault falls squarely into the long-established category of "side-channel" attacks, in which an attacker "deduces sensitive information by abusing unforeseen information leakage from computing devices."
We can expect more such attacks: where side-channel attacks once required direct access to a victim's computing devices, Mazurczyk says that is changing fast. "Owing to the highly interconnected and virtual nature of modern hardware and software, side-channel attacks can also be operated in a completely remote manner, avoiding contact with the victim," he says.
Mazurczyk adds, "The potential attacker can obtain a lot of useful information leading to industrial espionage or the loss of intellectual property without being close to the targeted system, so the risk for the adversary is minimal."
Paul Marks is a technology journalist, writer, and editor based in London, U.K.