Friday, 01 June 2012 11:13

Microsemi Responds To Claims Of a Backdoor In Their FPGA Products


So two days ago, we reported on a potential issue with some FPGA (Field Programmable Gate Array) devices from Microsemi/Actel, namely the ProASIC3. The issue was discovered by a group of researchers who were looking into a potential security risk with these programmable components. What they claimed to have found was a hidden backdoor with its own key set, which could allow access into the chip for readback, re-programming, and potentially wiping the instructions from the chip itself. You can read the original article here if you have not already.

To find this, the researchers used a new technique called Pipeline Emission Analysis (PEA). PEA differs from Differential Power Analysis (DPA) in that it can detect variations in operation that DPA will normally miss. Without boring you too much: when scanning silicon with DPA, the signal-to-noise ratio makes some commands impossible to understand. It is like listening to music through bad speakers or amplifiers, where individual instruments and voices become indistinct from each other and from the background noise. With PEA the signal-to-noise ratio is much higher, meaning you are able to hear each component of the music. Here is an excerpt from the original paper that explains the backdoor (you can read the entire paper here).
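To make that signal-to-noise problem a little more concrete, here is a minimal, purely illustrative sketch of the classic difference-of-means analysis that underlies DPA, run over synthetic "power traces". Every number in it (the trace data, the leak size, the sample index) is an assumption invented for illustration; it has nothing to do with the researchers' actual tooling, with PEA, or with the ProASIC3 itself.

```python
import numpy as np

# Illustrative difference-of-means analysis on synthetic "power traces".
# A tiny data-dependent leak is buried in measurement noise; averaging the
# traces in two partitions (split by the processed data bit) pulls it out.
rng = np.random.default_rng(0)

num_traces, samples = 20000, 400
noise_sd = 1.0
leak_size = 0.1   # amplitude ratio 0.1 -> power ratio 0.01 -> roughly -20 dB SNR

traces = rng.normal(0.0, noise_sd, (num_traces, samples))
data_bit = rng.integers(0, 2, num_traces)      # the bit the device processes
traces[:, 200] += leak_size * data_bit         # data-dependent draw at sample 200

# Difference of means: average each partition and subtract.
diff = traces[data_bit == 1].mean(axis=0) - traces[data_bit == 0].mean(axis=0)

peak = int(np.argmax(np.abs(diff)))
print(f"peak difference of {diff[peak]:.3f} at sample {peak}")
# With enough traces the peak at the leaking sample rises well above the
# residual noise floor, even though no single trace shows anything useful.
# Countermeasures such as added noise or an unstable internal clock shrink
# that peak -- the SNR problem described above -- which is why a measurement
# technique with a better SNR makes unknown commands easier to interpret.
```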

In the next set of experiments we used PEA technology (described in our paper [21]) to achieve better signal-to-noise ratio (SNR) in an attempt to better understand the functionality of each unknown command. Some operations were found to have robust silicon level DPA countermeasures. For example, the Passkey is documented as another layer of security protection on top of the AES encryption in PA3 to prevent IP cloning. Some DPA countermeasures found in the Passkey protection include very good compensation of any EM leakage and broadband spectrum spreading of side-channel emissions for the bit comparison leakage; internal unstable clock; high noise resulting in SNR of at the best –20 dB. The first generation of the sensor is presented in Figure 3a while the second generation is in Figure 3b. In the end we used a silicon scanning technique based on PEA pioneered by our project sponsor, combined with a classic DPA setup (resistor in power line, differential probe, oscilloscope, PC with MatLab). Nevertheless scanning for a backdoor was not a simple process.
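As a quick aside on that "–20 dB" figure, the conversion is the standard decibel definition (not something taken from the paper itself):

\[
\mathrm{SNR}_{\mathrm{dB}} = 10\log_{10}\frac{P_{\text{signal}}}{P_{\text{noise}}},
\qquad
-20~\mathrm{dB} \;\Longrightarrow\; \frac{P_{\text{signal}}}{P_{\text{noise}}} = 10^{-20/10} = \frac{1}{100}.
\]

In other words, at best the useful signal carried about one hundredth of the power of the surrounding noise, which is why the authors count that noise among the Passkey's DPA countermeasures.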


OK, so now we have some background on what the researchers were doing and how they found the backdoor. Today we received a reply from Microsemi denying that any such feature exists.

“Microsemi can confirm that there is no designed feature that would enable the circumvention of the user security.

The researchers' assertion is that with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled in all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else.”

Sounds good, right? Well, the issue here is that it looks like the Microsemi/Actel part does have those features left in, and they are still accessible.

“At this point we went back to those JTAG registers which were non-updatable as well as FROW to check whether we could change their values. Once the backdoor feature was unlocked, many of these registers became volatile and the FROW was reprogrammable as a normal Flash memory. Actel has a strong claim that 'configuration files cannot be read back via JTAG or any other method' in the PA3 and in their other latest generation Flash FPGAs [18]. Hence, they claim, they are extremely secure because the readback access is not implemented. We discovered that in fact Actel did implement such an access, with a special key used for activation.”
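For readers wondering what "scanning" a JTAG interface even looks like, the sketch below shows the general idea behind mapping undocumented JTAG functionality: try every instruction-register (IR) opcode and note which ones select a data register (DR) that is not the 1-bit BYPASS default. To be clear, this is a generic illustration only. The FakeTap class, its IR width, and every opcode in it are made up; this is not the ProASIC3 instruction set, and it is not the researchers' PEA-based method, which relied on side-channel measurements rather than simple DR-length probing.

```python
# Generic sketch of JTAG instruction-space mapping (illustration only).
# A hypothetical FakeTap stands in for a real TAP driver so the code runs
# on its own; none of the opcodes or lengths below describe a real device.

class FakeTap:
    """Stand-in for a real JTAG TAP driver (e.g. a bit-bang adapter)."""
    IR_WIDTH = 8
    # Invented opcode -> DR length map. On a real part this is the unknown
    # territory being explored; unlisted opcodes fall through to BYPASS (1 bit).
    _DR_LENGTHS = {0x0F: 32, 0x2A: 1, 0x55: 128}

    def dr_length(self, opcode: int) -> int:
        # On real hardware you would load the opcode into the IR, shift a long
        # known pattern through the DR, and count the clocks until it reappears.
        return self._DR_LENGTHS.get(opcode, 1)


def scan_instruction_space(tap) -> dict:
    """Return {opcode: DR length} for every IR value whose DR is not plain BYPASS."""
    interesting = {}
    for opcode in range(1 << tap.IR_WIDTH):
        length = tap.dr_length(opcode)
        if length != 1:
            interesting[opcode] = length
    return interesting


if __name__ == "__main__":
    for opcode, length in sorted(scan_instruction_space(FakeTap()).items()):
        print(f"IR 0x{opcode:02X}: DR length {length} bits")
```

Finding an opcode with an unexpected register behind it is only a first step, of course; per the excerpts above, the researchers combined that kind of probing with side-channel analysis to understand the functionality of each unknown command and to recover the key that unlocks the hidden access.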


The Microsemi PDF does not address this issue at all. They only claim that they disable their debug access before the part gets to the customer. Of course the method of deactivation is not listed (and rightly so), but if the deactivation is through the use of a randomized key then it is still possible that someone could gain access to it using the methods described in the original paper. In fact, Microsemi states very clearly that they cannot confirm or deny the claims laid out in the research paper.

If this is the case, how can they claim that there is no designed feature that would allow access? They do acknowledge that the FPGAs (ProASIC3) used in the testing were originally designed in 2002 and released in 2005, at a time before many were thinking about this type of access and before some of the common tools used to scan for it were developed. The ProASIC3 was not designed to resist DPA or PEA, but Microsemi has licensed technology from the developers of DPA (Cryptography Research Inc.) so that they can build proper DPA countermeasures into their next-generation FPGA products.

What we are seeing here is Microsemi stating clearly that they did not design any accessible features into the ProASIC3, but that once they ship the parts to a customer for programming, all bets are off. When ProASIC3s are programmed before resale or installation, it is possible that someone could program in access of the type found. “The ProASIC3 FPGAs involved were designed in 2002 and released to the public in 2005, and contain several levels of security settings. The level of setting involved is determined by the actual customer who programs the FPGA.” So basically they are saying: we design our devices with security in mind, but we cannot force this on the companies that use them in their products. It is the equivalent of buying a house with a security system but leaving it off all the time. The security is there, just not used.

This means that it is entirely possible that someone could implement a backdoor using the ProASIC3 or a similar FPGA simply by programming in the functionality for access using the built-in security tools available in the design. Microsemi, while not blameless here, cannot be held responsible for what someone does or does not do with the product during the programming phase. It does sound like they are working on ways to prevent the type of deep scan used to find the different security features in their FPGAs, but that will not help with all of the parts currently on the market.

There is sure to be more to this story, and we will cover it as it all comes out.
