Like the ‘horse of a different colour’ in The Wizard of Oz, field-programmable gate array (FPGA) and customisable system-on-chip (cSoC) integrated circuits are the chameleon workhorses of the electronics world. Available in a variety of sizes and arrangements, they can be programmed to perform almost any logic function, small or large. The tightly coupled general-purpose microprocessor and associated sub-system of the cSoC can be programmed with firmware to perform tasks best suited to sequential execution. An ever-expanding set of surrounding dedicated logic blocks, memories and interfaces, such as math blocks, SRAM, non-volatile memory and high-speed serialiser-deserialiser (SERDES) interfaces, completes this almost infinitely versatile device. But, with all this flexibility, can a high standard of security be maintained?
Before answering that question, let us first take up another: Which security objectives are we most likely interested in? Traditionally, FPGA and cSoC security is broken down into two distinct concepts: design security and data security.
In design security, the objective is to protect the interests of the IP owner—usually the original equipment manufacturer (OEM)—whose engineers designed the soft logic and firmware used to configure the device at the board- or system-level manufacturing step. Because of the investment in creating the FPGA or cSoC soft IP, the OEM wishes to keep its design confidential so that it can’t be copied or cloned.
Possibly worse is when the logic or firmware design is reverse-engineered, and the OEM’s technology is stolen and adapted for competitive products. When this happens, the value of the company may be slashed to a fraction of its pre-theft value, as happened to one US company when its main customer stopped placing orders and started making the systems itself, allegedly after stealing its design. In this example, the value of the US company’s stock dropped 84 per cent.
Another design security concern is overbuilding. Overbuilding occurs when an unscrupulous contract manufacturer, or its employees, builds more systems than the number authorised by the OEM, and then sells the excess systems for its own profit. Aside from depressing the market and profits for the legitimate manufacturer, the OEM may also find customers of the often substandard illicit units coming to it for warranty repairs. The nightmare scenario is being sued for defects in a system that you didn’t build but that bears your name!
While it may be easy for an unscrupulous contract manufacturer to procure additional quantities of most components in a system, FPGAs can actually help reduce the occurrence of overbuilding, thanks to the controls put in place by certain FPGA manufacturers and the expense of legitimately recreating a comparable FPGA design from scratch.
An ever-increasing design security concern is the use of counterfeit or untrusted components. Due to the high cost of developing FPGAs, and the controls in place at the FPGA foundries, the chance of getting a functioning FPGA manufactured with an unauthorised mask set is fairly small. Also, since FPGA and cSoC ICs are unique in that a very large part of the IP is programmed into the device by the OEM, the risk of untrusted IP is largely moved to the configuration step rather than the foundry step.
But a very real risk is the entry into the supply chain of used parts, or of authentic but less expensive parts re-marked as more expensive devices that should have been designed or screened for faster speed grades, wider temperature ranges or higher reliability levels. These parts may even work, some of the time. Certainly, the customer is not receiving the value that was paid for and is potentially risking system failures.
Data security moves the focus from the OEM’s IP to the data processed by the device. This is typically data owned by the OEM’s customer, or the customer’s customer, rather than the IP owned by the OEM. This is now into the realm of information assurance, i.e., protecting the data stored, processed or communicated by the device during run-time. Often, cryptographic methods are used to protect the confidentiality of this data, or to ensure its integrity and authenticity.
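The confidentiality, integrity and authenticity goals mentioned above are commonly combined in an encrypt-then-MAC construction. The sketch below is purely illustrative (the function names are hypothetical, and the SHA-256 counter-mode keystream stands in for a vetted cipher such as AES-GCM, which real designs would use):

```python
import hashlib
import hmac
import os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Illustrative stream cipher: SHA-256 in counter mode generates a
    # keystream that is XOR-ed with the data, so encryption and
    # decryption are the same operation. (Not for production use.)
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def protect(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: confidentiality comes from the cipher; integrity
    # and authenticity come from an HMAC over the nonce and ciphertext.
    nonce = os.urandom(16)
    ciphertext = keystream_xor(enc_key, nonce, plaintext)
    tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + ciphertext + tag

def unprotect(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    # Verify the tag before decrypting; any modification is detected.
    nonce, ciphertext, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: data was tampered with")
    return keystream_xor(enc_key, nonce, ciphertext)
```

Note that verification uses a constant-time comparison (`hmac.compare_digest`), since a naive byte-by-byte comparison would itself leak timing information, which is exactly the class of side channel discussed below.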
An FPGA or cSoC without solid design security features is not a good candidate for data security applications. In many projects data security is only a minor concern, but in some applications, such as financial payment and military communication systems, data security is paramount. Data security requirements are becoming more prevalent with the explosion of the ‘Internet of Things,’ i.e., ‘things’ other than conventional personal computers and servers attached to the Internet. These include industrial control sensors and systems, medical devices, smart electric meters and appliances, and a myriad of wireless devices.
Tampering can occur at the device level or the system level. In tampering attacks, an adversary tries to gain some advantage by obtaining information to which he shouldn’t have access. These attacks range from passively eavesdropping on signals intended to be confidential to actively changing signals or the state of a system by various means. Often the goal is to reverse-engineer the system in order to find other security weaknesses, or to accelerate the attacker’s own development of similar systems.
Network attacks rightly get a lot of attention, since poorly designed protocols or network protections such as firewalls may allow an adversary to steal information from the comfort of an Internet café halfway around the world. We read almost every day about thousands of poorly protected credit card numbers or account login credentials being stolen from Web servers. However, threats may be closer to home, such as a crooked contract manufacturer or a company insider stealing the company’s IP assets.
Finally, there are attacks that get right down to the hardware level, and try to steal the IP right out of a device. This may involve first extracting cryptographic keys from the device that are used to protect the IP, perhaps together with capturing a new configuration file being downloaded to a device in the field. Extracting processor firmware and reverse engineering it is another real threat.
Attacks only get better with time. Security that was considered good enough for many applications a few years ago may now be considered too vulnerable to use in the same applications today.
An example of a device-level attack that is becoming more prevalent and less expensive to perform is ‘differential power analysis’ (DPA). In a DPA attack, the power supply consumption of an integrated circuit such as a microcontroller or FPGA is monitored while it performs cryptographic operations on data using a secret key. It is assumed that the adversary has access to the encrypted form of the data (or else why encrypt it in the first place?). With that data and enough power supply measurements collected while the key was being used, the adversary can compute the value of the key without even depackaging the chip. Depending on the rate of information leakage via the power supply and the sophistication of the attack, keys may be extracted in timeframes ranging from days down to milliseconds.
Without countermeasures, all electronic systems performing cryptography are subject to DPA-type attacks. These include most of the microcontrollers and all the FPGAs currently on the market. Reports published over the last several years have demonstrated attacks where the keys used to load encrypted bitstreams into several brands of FPGAs were extracted using DPA. With the keys, it is usually possible then, and in some cases easy, to decrypt the bitstream. It is believed that every FPGA on the market as of mid-2012 was vulnerable to differential power analysis.
Other types of hardware tampering are more invasive. In some, called semi-invasive attacks, only the package is removed, so the device can emit or respond to light but the silicon is not mechanically disturbed. Even more invasive are attacks where the minuscule traces on an IC are cut or probed, or where the device is de-processed layer by layer to determine how it works. Generally, the more invasive an attack, the more expensive it is to mount, requiring equipment such as electron microscopes and focused ion beam machines. These attacks are also harder to defend against.
There are two main technologies used for commercial FPGAs today: SRAM and Flash. In addition, anti-fuse FPGAs are still used in space applications, which require enhanced tolerance to background radiation.
SRAM FPGAs, including those from Xilinx and Altera, hold device configuration bits in volatile SRAM cells. These bits determine the logic functions that the FPGA performs. However, since the configuration memory is volatile, it forgets the configuration whenever power is removed, and must be reloaded from external sources after each power-up. In low-end devices, the configuration bitstream moves from the external non-volatile memory where it is held, to the SRAM FPGA, in unencrypted form. This provides virtually no security, as anyone with a storage scope or logic analyser can easily record the data. Also, it is often trivially easy to copy the external memory chip to create clones of the system.
To address some of these weaknesses, higher-end SRAM FPGAs utilise encryption of the configuration bitstream. In these, if the option is used, the bitstream is stored in encrypted form in the external non-volatile memory, and decrypted on the fly by the FPGA each time it is loaded. For this to work, at least the decryption key needs to be stored in non-volatile form within the predominantly volatile FPGA. This is sometimes done using a few bits of one-time-programmable anti-fuse memory, or by using a battery to keep part of the chip alive at all times. Of course, if the battery goes dead or is disconnected, even for a few milliseconds, the key is forgotten and the system won’t boot up.
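The power-up behaviour just described can be modelled as a tiny state machine. This is a conceptual sketch only: the class and method names are hypothetical, and the XOR keystream is a stand-in for the AES-based bitstream cipher a real device would use.

```python
import hashlib

def xor_keystream(key: bytes, data: bytes) -> bytes:
    # Stand-in symmetric cipher: XOR with a SHA-256-derived keystream.
    # (Illustrative only; real bitstream encryption is AES-based.)
    stream = hashlib.sha256(key).digest()
    ks = (stream * (len(data) // 32 + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

class SramFpga:
    """Toy model of an SRAM FPGA with a battery-backed bitstream key."""

    def __init__(self, key: bytes):
        self._battery_key = key  # retained only while the battery lasts
        self._config = None      # volatile SRAM configuration

    def power_up(self, encrypted_bitstream: bytes) -> bool:
        # Decrypt the external bitstream on the fly at every power-up.
        if self._battery_key is None:
            return False         # key forgotten: the system won't boot
        self._config = xor_keystream(self._battery_key, encrypted_bitstream)
        return True

    def power_down(self) -> None:
        self._config = None      # volatile SRAM forgets the configuration

    def disconnect_battery(self) -> None:
        self._battery_key = None # even a brief glitch erases the key
```

The model makes the trade-off explicit: the external memory never holds plaintext, but the whole scheme hinges on the battery-backed key surviving every power interruption.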
In flash-based FPGAs, configuration bits are held in non-volatile flash memory. Thus the configuration needs to be loaded only once—typically during the board assembly process. However, since flash memory is also reprogrammable, flash FPGAs can be reconfigured multiple times over their lifetime, if desired.
Flash-based cSoC devices also have on-chip embedded non-volatile memory (eNVM) for holding the processor firmware securely on-chip. For secure field upgrades of the FPGA fabric configuration and/or the eNVM array, most flash FPGAs also offer built-in bitstream decryption functionality. Unlike SRAM FPGAs that use decryption on every power-up cycle, flash FPGAs need to exercise this feature only when doing an upgrade. For higher security, these and related features can be permanently disabled.
Trends in advanced security architecture and countermeasures
With attacks continuously improving, FPGA countermeasures also must advance to keep ahead. The threat environment is daunting, but all hope is not lost. No security is absolute, but with proper architecture and design, next-generation FPGAs will be significantly stronger than those currently on the market.
Next-generation FPGAs and cSoCs will incorporate DPA countermeasures for all built-in ‘bitstream’ cryptographic operations. Over time, deployment of DPA countermeasures is expected to become the norm in the FPGA industry, as is already the case with microcontrollers used in financial applications, set-top boxes and the trusted platform module chips used in computers.
Over the next few years, we can expect to see the protocols used to initially configure and upgrade FPGAs and cSoCs improve to provide more features and improved security. New use models that help reduce costs will become available, especially where security demands are elevated, such as when board and system manufacturing is done by contract manufacturers, or in more sensitive applications such as systems used for national defence or homeland security. Field reprogramming will become more automated, simpler and more secure. In addition, supply chain issues such as counterfeit devices will be largely mitigated by appropriate countermeasures.
End users implementing data security applications will find new features aimed at making their designs more efficient. Implementing DPA countermeasures at the data security application level will also become the norm over time. A nascent industry of specialised third-party IP providers is developing to produce high-quality, proven DPA-resistant soft cryptographic logic and firmware IP.
Some FPGAs and cSoCs will be developed with more highly evolved anti-tamper countermeasures, akin to smartcard microcontrollers. These devices will be self-contained single-chip security modules and highly flexible security processors that can be used where design and data security demands are highest. Coupled with board-level countermeasures, they will be able to satisfy the needs of all but the very highest security applications, without the need for developing an ASIC.
In a nutshell
The threat level for FPGAs and cSoCs will continue to increase as attacks get better. At the same time, more applications are demanding enhanced data security. For example, industrial controls and medical devices that in the past barely considered security increasingly treat it as a prime design requirement. This trend is being accelerated by machine-to-machine applications and the ‘Internet of Things.’ Anti-tamper requirements in both defence and commercial applications are on the rise as everyone tries to stem the tide of IP theft and protect their companies and countries from the potentially disastrous consequences.
FPGAs and cSoCs are evolving to address current and anticipated device and system vulnerabilities. A new breed of devices that incorporate logic flexibility of a traditional FPGA, software flexibility of a microcontroller and security of a smartcard IC will be available within a few device generations. It’s an exciting time for FPGA security!
The author is a senior principal product architect at Microsemi SoC Group