In mid-May, the Trump administration banned Huawei of Shenzhen, China, from selling its technology into the American information technology and telecom sectors on the grounds that it could pose a threat to U.S. national security. The White House fears the firm's 5G mobile networks, in particular, could be used to pass commercial and military secrets to Beijing under China's sweeping 2017 National Intelligence Law, which mandates that network operators hand data to the government if asked to do so.
While Huawei consistently maintains it would never comply with such a request, doubt hangs over its ability to resist a demand from the totalitarian state's intelligence services.
Yet Trump's ban on Huawei quickly led to unintended consequences: it emerged that Huawei's handset business could lose access to even basic app and Android updates from U.S.-based Google for smartphones it launches in the future. Ironically for an issue all about trust in technology, this could leave users of future Huawei phones running unpatched, insecure apps.
"It may be a ticking security time bomb," says Eoin Keary, CEO of Edgescan, a Dublin, Ireland-based cybersecurity firm.
That issue, alongside U.S. chip industry concerns over component sales lost to Huawei, and a threat from Huawei that it may close its U.S.-based research lab at the cost of 850 jobs (the company announced July 23 it was laying off more than 600 of its workers in the U.S.), has the administration backtracking somewhat, leaving the situation one of great confusion.
Was Trump's move any way to boost trust in technology?
As the ramifications of the attempted ban continue to unfold, current computer security research suggests it will not provide the Trump administration with any kind of magic bullet that instantly delivers trustworthy 5G. The reason: technological and economic change means new fronts are opening up in the field of covert data theft, with computer firmware and hardware itself now vulnerable to attack, potentially in ways so difficult to detect that even systems from 'trusted' suppliers could leak confidential data to adversaries.
These new attacks include firmware trojans that maliciously modify the basic, built-in, boot-level control software in a system, perhaps sabotaging security by preventing software updates and vulnerability patches from being applied, and/or by allowing data exfiltration. Such attacks can be aided, it turns out, by the increasing use of a popular and versatile logic circuit: the field-programmable gate array, or FPGA.
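To see what such a trojan must subvert, consider the simplest possible boot-time defense: a hash check of the firmware image against a reference digest anchored in read-only storage. The Python sketch below is purely illustrative (the image, digest, and function names are invented), but it shows why a trojan that tampers with this check, or simply forces it to succeed, can both run sabotaged code and block legitimate updates.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# At provisioning time, the vendor records the digest of the known-good image;
# in a real device this digest would be anchored in ROM or one-time fuses.
GOOD_IMAGE = b"FIRMWARE-v1.0 (stand-in for a real image)"
TRUSTED_DIGEST = sha256_hex(GOOD_IMAGE)

def secure_boot(image: bytes) -> None:
    """Boot only if the image hash matches the digest anchored in ROM."""
    if sha256_hex(image) != TRUSTED_DIGEST:
        raise SystemExit("integrity check failed; refusing to boot")
    print("booting verified firmware")

secure_boot(GOOD_IMAGE)   # boots normally
# A firmware trojan that patches secure_boot() to always succeed could
# silently run tampered images, or block future updates from being applied.
```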
In addition, the globalization of the microchip supply chain means that chips are now designed, fabricated, tested, packaged, and delivered by different low-cost, outsourced providers all over the world—with the fabs often in China, even for Huawei's more trusted rivals—providing multiple points where small, maliciously inserted circuits called "hardware trojans" could be introduced.
"Supply chains for telecommunications networks have become global and complex," admits Norman Lamb, chair of the British parliament's Science and Technology Committee, after taking evidence from the industry. "Many vendors use equipment that has been manufactured in China, so a ban on Huawei equipment would not remove potential Chinese influence from the supply chain."
Sometimes comprising just a few hundred hard-to-find transistors amongst the millions (or billions) on a chip, hardware trojans can act as service-denying kill-switches, or, when triggered, leak data to attackers through paths dubbed side channels. Finding them is difficult: "How can we interrogate a circuit for malice when we don't trust the circuit in the first place?" asked Kenneth Plaks, a program manager at the U.S. Defense Advanced Research Projects Agency (DARPA) working on ways to obfuscate chip circuit layouts so attackers cannot modify them.
Like DARPA, other security teams are concerned that hardware trojans need containment, too. "Users may believe their systems are secure if they run only trusted software. However, trusted software is only as trustworthy as the underlying hardware," warned Hansen Zhang and colleagues from Princeton University at April's ACM International Conference on Architectural Support For Programming Languages and Operating Systems (ASPLOS 2019) in Providence, RI. "Even if users run only trusted software, attackers can gain unauthorized access to sensitive data by exploiting hardware errors or by using backdoors inserted at any point during design, manufacture, or deployment."
Just how serious a risk a firmware vulnerability (albeit an accidental one) can pose was revealed the same week the White House ban on Huawei was announced. Indeed, in what was a very bad week for cybersecurity all around, vulnerabilities were revealed in Microsoft, WhatsApp, Intel, and Cisco Systems products. It was Cisco's flaw that stood out as very different, however: cyberanalysts at Red Balloon Security in New York City found a vulnerability, which they have dubbed Thangrycat, in the Trust Anchor module that Cisco uses to securely boot many of its products, such as network routers, switches, and firewalls.
In Trust Anchor, the firmware of an FPGA is stored in a flash memory chip, rather than in a read-only memory (ROM). When fed to the FPGA, this firmware "bitstream" dictates the way logic gates are connected in a Boolean circuit to ensure that, at boot time, software updates and patches are applied. However, Red Balloon found that by modifying the bitstream, they could rearrange the FPGA's logic circuit "so an attacker can remotely and persistently bypass Cisco's secure boot mechanism and lock out all future software updates," the company says.
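The underlying weakness can be modeled in a few lines. The Python sketch below is a conceptual illustration only, not Cisco's actual design: it assumes the bitstream and its reference digest sit in the same writable flash, so an attacker who can rewrite the two together defeats the check entirely.

```python
import hashlib

# Both the bitstream and its reference digest live in the same writable flash.
flash = {"bitstream": b"legitimate-bitstream"}
flash["digest"] = hashlib.sha256(flash["bitstream"]).hexdigest()

def check_bitstream(store: dict) -> bool:
    """Verify the stored bitstream against the stored reference digest."""
    return hashlib.sha256(store["bitstream"]).hexdigest() == store["digest"]

print(check_bitstream(flash))    # True: the check passes

# An attacker with persistent write access rewrites both values together:
flash["bitstream"] = b"malicious-bitstream"
flash["digest"] = hashlib.sha256(flash["bitstream"]).hexdigest()
print(check_bitstream(flash))    # still True: the root of trust is defeated
```

Had the digest been burned into true read-only memory, the second check would have failed; that is the gap between storing a root of trust in ROM and storing it in flash.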
It was an important find, says Alan Woodward, a visiting professor of cybersecurity and digital forensics at the University of Surrey in Guildford, U.K. "This finding really matters. The root of trust in embedded devices quite often relies upon FPGAs. So if you can do this, you can effectively circumvent secure boot processes and you have a strong attack vector."
Andrew Tierney, a consultant with penetration testing firm PenTestPartners, in Buckingham, U.K., says Thangrycat revealed an exploitable gap in the product design. "The interesting aspects here are the mistakes that Cisco made in the implementation. It seems remiss to develop a secure boot system that doesn't provide secure boot." Most devices that do offer secure boot, he says, tend to do so using read-only data; stored in writable flash memory, Cisco's bitstream was modifiable.
"We haven't seen many attacks against FPGAs yet. This is possibly due to obscurity; taking a bitstream, the firmware of an FPGA, reverse-engineering it, and then modifying it is very challenging," Tierney says, adding that it's also time-consuming and expensive.
Despite the challenges and expense, researchers remain concerned about future attacks on firmware, especially in the kind of critical infrastructure that will depend on the litany of communications links 5G promises as the supposed enabler of future Internet of Things applications. After all, the U.S./Israeli attack on Iran's Natanz nuclear enrichment plant using the Stuxnet worm, which allegedly shook 400 uranium centrifuges to pieces by injecting them with sabotaged motor control data, proved the viability of firmware attacks on embedded programmable logic.
A firmware trojan family tree
As a result, a team from the New York University Polytechnic School of Engineering, led by Charalambos Konstantinou, has drawn up a whole taxonomy of the forms firmware trojans could feasibly take to disable a sample critical infrastructure application, such as a smart power grid, so that defense mechanisms against their distribution can be established in smart grid testbeds.
Usefully for security researchers, the school developed a raft of sample firmware trojans, too, with insertion mechanisms ranging from delivery via test ports or simple communications links to, for the really determined money-no-object attacker, "chip-off forensics." In the latter, the top of a memory chip is removed and the die exposed, allowing data to be read, rewritten, and reinjected, to scurrilous ends.
How attackers deliver their payloads is also an issue. Stuxnet relied on simple social engineering, with USB sticks left in public places like cafes and car parks near the target plant. However, delivering a threat based on something like Thangrycat's mechanism would be far tougher. "It's not that easy to exploit, as you need administrator access, but if you were in the equipment supply chain somewhere, that might be possible," says Woodward.
It is from the supply chain (the extended, global, highly outsourced, out-of-sight/out-of-mind semiconductor microchip one) that specialists believe hardware trojans will hail. One type of hardware trojan could simply sabotage the chip's chemistry so that connections or transistor channels burn out and the chip fails after a set time: effectively, a kill-switch on a timer. More likely, others might use a trigger signal to activate circuitry that has been added to a chip at some stage of manufacture such that it delivers a result (a disabling kill-switch command, perhaps, or data from a memory chip, such as a stored cryptographic key).
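In software terms (real trojans are, of course, circuits, so the class, values, and names below are invented for illustration), a triggered trojan behaves like a hidden comparator watching a bus for one rare "magic" value:

```python
TRIGGER_PATTERN = 0xDEADBEEF          # hypothetical rare activation value

class TrojanedComponent:
    """Toy model of a chip with a hidden comparator watching a data bus."""

    def __init__(self, secret_key: int):
        self._secret = secret_key     # e.g., a stored cryptographic key
        self.disabled = False

    def write(self, word: int) -> None:
        # Trigger logic: a handful of hidden gates comparing the bus value.
        if word == TRIGGER_PATTERN:
            self.disabled = True                         # kill-switch payload...
            print(f"side channel leaks: {self._secret:#010x}")   # ...or a leak
        # ...normal, documented behavior continues here...

chip = TrojanedComponent(secret_key=0x0BADC0DE)
chip.write(0x00000001)    # ordinary traffic: nothing visible happens
chip.write(0xDEADBEEF)    # the rare trigger value activates the payload
```

Because the trigger value almost never occurs in ordinary operation, conventional functional testing is unlikely to stumble across it.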
To do that, however, attackers need to know the circuit layout so they can see, for instance, where they can place their malicious circuit elements. That is something that can be fought, Johann Knechtel and colleagues at the Tandon School of Engineering at New York University told the International Conference on Omni-Layer Intelligent Systems (COINS) on the Greek island of Crete, in early May.
In their COINS paper, the NYU team revealed how a design can effectively be frozen by describing the list of logic-gate connections, known as the 'netlist' to IC designers, in code, then encrypting and storing it on the chip in tamper-proof memory. At runtime, a hash of that stored code is compared with one generated dynamically from the in-use circuit; the two hashes should match if the chip is untainted.
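A minimal sketch of that comparison, with the netlist encoding, hashing scheme, and gate names assumed purely for illustration, might look like this:

```python
import hashlib

def netlist_digest(netlist) -> str:
    """Hash a canonical, sorted list of (gate, input, output) connections."""
    canonical = "\n".join(",".join(conn) for conn in sorted(netlist))
    return hashlib.sha256(canonical.encode()).hexdigest()

design = [("AND1", "net_a", "net_q"), ("XOR2", "net_q", "net_out")]
stored = netlist_digest(design)          # held in tamper-proof memory

# Untampered chip: the regenerated hash matches the stored one.
assert netlist_digest(list(design)) == stored

# Trojan gates inserted during manufacture: the hashes no longer match.
tampered = design + [("AND9", "net_q", "net_leak")]
assert netlist_digest(tampered) != stored
print("tampering detected")
```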
In another technique, dummy logic gates can be added to the circuit to camouflage the physical design, altering its appearance and limiting the sites where trojan creators can place things.
The NYU team is not alone. "By using a combination of techniques, our goal is to make the placement and triggering of hardware trojans more difficult and their detection easier," says DARPA's Plaks.
One of the countermeasures the defense research agency is investigating involves placing a radio frequency (RF) intrusion-sensor circuit on the chip die. Made purposely fragile, the sensor breaks if the chip is tampered with, disabling the device.
In addition, Plaks says DARPA is investigating how the addition of trojan transistors and logic gates affects the timing of circuits, as parasitic capacitances from malicious add-ons will change circuit timing in what researchers hope are telltale ways.
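The statistical idea can be sketched simply (the delay figures and three-sigma threshold below are invented for illustration): characterize path delays on known trojan-free "golden" chips, then flag devices whose measured delays fall outside normal process variation.

```python
from statistics import mean, stdev

# Delays (ns) measured on a known trojan-free "golden" sample of chips.
golden_delays_ns = [1.02, 0.98, 1.01, 1.00, 0.99, 1.03]
mu, sigma = mean(golden_delays_ns), stdev(golden_delays_ns)

def path_suspicious(measured_ns: float, k: float = 3.0) -> bool:
    """Flag a path whose delay deviates more than k sigma from the mean."""
    return abs(measured_ns - mu) > k * sigma

print(path_suspicious(1.01))   # False: within normal process variation
print(path_suspicious(1.31))   # True: extra delay could betray added gates
```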
Zhang's team at Princeton is taking a different tack: it has developed TrustGuard, a sentry-like system that checks that microchips issue data only in formats expected by the design, raising an alarm if out-of-the-ordinary transmissions, such as data leaks on side channels, occur.
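In spirit (this is a loose sketch, not the TrustGuard implementation; the message formats are assumed), such a sentry is a gatekeeper that forwards only transmissions matching the design's expected outputs:

```python
import re

# Output formats the design is expected to produce (assumed for illustration).
EXPECTED = re.compile(rb"^(ACK|DATA:[0-9A-F]{8})$")

def sentry(outbound: bytes) -> bytes:
    """Forward only transmissions that match an expected output format."""
    if not EXPECTED.fullmatch(outbound):
        raise RuntimeError(f"unexpected transmission blocked: {outbound!r}")
    return outbound

print(sentry(b"DATA:0BADC0DE"))           # conforming traffic passes through

try:
    sentry(b"KEY=5F3759DF")               # out-of-spec: possible side channel
except RuntimeError as alarm:
    print(alarm)                          # the alarm is raised instead
```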
All these anti-trojan measures need resources, however, and so carry speed and power-drain penalties (TrustGuard, for example, reduces performance by 15%). Hardware trojan defenses thus remain very much works in progress, in need of improvement.
Yet if all this innovation in hardware and firmware trojan countermeasures tells us one thing, it is that trust in technology is a moveable feast. Like all of cybersecurity to date, it is an arms race, and all the manufacturer bans in the world will not guarantee systems are trustworthy now that, thanks to globalization, device manufacture has been cast to the four winds.
Paul Marks is a technology journalist, writer, and editor based in London, U.K.