Cryptographic Techniques in the Design of Firmware
Equally a fact of life is the possibility that the device finds itself the target of
malicious activity. Principles of cryptography often play an important part in
protecting a device against attackers, and they should be considered when designing its firmware.
This article will discuss four examples of how embedded processing can be made
more secure using cryptographic techniques: performing over-the-air (OTA) updates
safely; encrypted communication on a host network; secure key sharing; and
authentication.
After development and mass production, OEMs require the ability to update the
firmware within a deployed device to correct bugs and add new product features. The
most practical way to perform such an update, especially when a large number of
devices must be updated, is for each device to download the new firmware over-the-
air, most commonly from a server on the internet.
With communication over the internet, data integrity and authenticity are widespread
concerns. Potential hazards like man-in-the-middle (MITM) attacks make it essential
for a device to verify that what it has downloaded is actually what it expected. If it
downloads and installs what an attacker is passing off as legitimate firmware, then
the attacker has successfully hijacked the device.
Thus, the device must be able to ascertain that the downloaded firmware image is
from a trusted source, and that it’s exactly the one provided by that source and not
altered in any way. One common approach is to use a digital signature. Digital
signing and signature verification are cryptographic algorithms used to verify the
authenticity of data, in this case a firmware image file.
Keeping the technical details brief, the digital signature is generated by computing
a hash (or digest) over the entire image file and encrypting this hash with the private
key. The downloaded signature is verified by decrypting it with the public key and
comparing the result with a hash computed over the entire downloaded image file.
Signature verification is successful if these two items compared are equal.
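The signing and verification steps can be sketched in Python. The RSA parameters below are textbook toy values and the image bytes are invented for illustration; a real deployment would use a key of 2048 bits or more, typically through a vetted crypto library rather than hand-rolled arithmetic.

```python
import hashlib

# Toy RSA key: n = 61 * 53, public exponent e, private exponent d = e^-1 mod phi(n).
# Textbook sizes for illustration only; real keys are >= 2048 bits.
n, e, d = 3233, 17, 2753

def sign(image: bytes) -> int:
    """Hash the entire image, then 'encrypt' the hash with the private key d."""
    digest = int.from_bytes(hashlib.sha256(image).digest(), "big") % n
    return pow(digest, d, n)

def verify(image: bytes, signature: int) -> bool:
    """'Decrypt' the signature with the public key e and compare hashes."""
    digest = int.from_bytes(hashlib.sha256(image).digest(), "big") % n
    return pow(signature, e, n) == digest

image = b"example firmware image bytes"   # stand-in for the downloaded file
sig = sign(image)
assert verify(image, sig)                 # untampered image verifies
assert not verify(image + b"x", sig)      # any modification breaks verification
```

Note that the device only ever needs the public pair (n, e); the private exponent d stays with the signer.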
The public key resides in the device so that it can perform this signature
verification whenever needed. Because any file digitally signed with the private key
will authenticate as legitimate, the private key must be kept secure so that it never
falls into the wrong hands.
We therefore have a means for the device to determine whether the firmware image it
has downloaded is identical to the one from our trusted source, and thus safe to
install, or is a fake that must certainly not be installed. Without such a mechanism,
OTA updates could be vulnerable to hackers seeking to introduce their own code into
the embedded device (Fig. 1).
1. A safe over-the-air firmware update implemented by digitally signing and verifying
the firmware image.
The device and all connecting peers should encrypt their communication with one
another. Plaintext traffic can be easily analyzed by hackers, either to steal sensitive
information or understand how to mimic the communication. By doing the latter, a
hacker could send malicious commands to control the device and the device would
respond to them, believing that they’re coming from a legitimate peer.
A basic scheme, in which the same symmetric key (for a cipher such as AES) is
pre-installed in the device and all of its legitimate peers, has the advantages of
simplicity and speed. It's straightforward to implement in firmware. Plus, AES is
fast and requires relatively little computing power, making it an attractive option
for embedded devices.
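The shape of such a pre-shared-key scheme can be sketched as follows. AES itself isn't in the Python standard library, so this illustration substitutes a toy SHA-256-based keystream for the cipher; the key and nonce values are invented. What matters here is the structure: one static secret installed in every node.

```python
import hashlib
from itertools import count

PRESHARED_KEY = b"factory-installed-secret"  # identical key in device and every peer

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream built from hash blocks -- a stand-in for a real AES mode."""
    out = b""
    for block in count():
        out += hashlib.sha256(key + nonce + block.to_bytes(4, "big")).digest()
        if len(out) >= length:
            return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

decrypt = encrypt  # XOR stream cipher: the same operation in both directions

message = b"sensor reading: 23.4 C"
ciphertext = encrypt(PRESHARED_KEY, b"nonce-0001", message)
assert decrypt(PRESHARED_KEY, b"nonce-0001", ciphertext) == message
```

Every node that holds PRESHARED_KEY can read and forge all traffic, which is exactly the exposure discussed next.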
However, its main weakness is susceptibility to compromise. Because the same key is
pre-installed to the device and to all legitimate peers—and it doesn’t change—hackers
are afforded more opportunity to compromise it. They have more bearers of the key
to target, and no time limit to discover the key.
One improvement would be to change the key periodically. Then even if a hacker
succeeds in obtaining the key, it will not permit interaction with the device or
decryption of future traffic if it’s expired and a new key has become active.
But when a key change occurs, how can the new key be communicated safely among
all of the peers? Remember, they’re on a public network where eavesdropping is a
major concern. The answer lies once again in techniques from asymmetric
cryptography. It's worth noting that asymmetric algorithms are relatively more
complex, and therefore require more computing power, than their symmetric
counterparts.
The benefit of using both symmetric and asymmetric algorithms together is that it
enables the implementation to take advantage of each one’s strengths, while at the
same time not being adversely impacted by its limitations. Protection of data traffic
needs to happen continuously. Therefore, it makes sense to encrypt and decrypt
symmetrically to be power-efficient.
The encryption key should be changed periodically because a static key is
vulnerable, but no symmetric method can do this safely. An asymmetric approach is
available; however, it's not very power-efficient. Fortunately, a key change is only
an occasional event, so its power cost isn't critical. Consequently, it's
appropriate to change the key asymmetrically.
The remainder of this article elaborates on the limitation of a system that secures
data traffic only with a symmetric cipher, and on how the key sharing mechanism
would work in an improved design that uses an asymmetric approach. In addition, it
will be demonstrated that an asymmetric algorithm can be used for authentication to
guard against types of attacks wherein a hacker seeks to impersonate the embedded
device or a communicating peer.
In practice, these techniques usually come packaged in a standard protocol. For
example, if your device will be talking to peers on a TCP/IP network over Wi-Fi,
then very likely you should be using Transport Layer Security (TLS) to secure it. If
your device will communicate over Bluetooth Low Energy, then LE Secure
Connections could be the best approach, at least at the time of this writing.
Earlier, we mentioned that a symmetric encryption algorithm like AES is ideal for
protecting embedded-device communications because it computes quickly and
requires relatively low computing power. This is particularly important for battery-
powered devices that must be able to run for a long time before needing a recharge or
battery replacement.
We also identified a shortcoming, however: with a simple method employing
only an algorithm like AES, both communicating nodes use the same unchanging
secret key, which presents a standing target to hackers. If a hacker is able to
discover this secret key, it becomes possible to decrypt all captured
communications encrypted with it (past, present, and future), as well as to interact
maliciously with the compromised node. This would clearly be a serious issue, and
potentially disastrous, depending on the sensitivity of the data transferred between
the device and a legitimate peer, or the ramifications of a hacker being able to
interact with them.
The ability to change the secret key periodically would therefore be very
advantageous. If one key did become compromised, the hacker would be able to
decrypt only the network traffic encrypted with that one key before the next key
became active, as well as interact with the device only until that next key became
active.
But how can the secret key be changed safely over the communication network? It's
the very network that we suspect hackers of eavesdropping on, and that we're aiming
to protect by encrypting. Transmitting a new key in plaintext would be problematic,
as it would be blatantly visible to an eavesdropping hacker. Transmitting a new key
encrypted with the previous key would be equally problematic, because it raises the
question: how does the very first key get transmitted when the two nodes first begin
communicating? Furthermore, the whole reason for activating a new key is that we
suspect the previous key may have been compromised.
RSA encryption, for example, which is widely used to secure internet traffic, operates
with the plaintext message as the input to a trapdoor one-way function and the
ciphertext as the output from it. That is, the (forward) encryption is easy, whereas
the (inverse) decryption is extremely difficult without the private key, the piece of
secret information that serves as the trapdoor.
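The trapdoor property can be seen directly with the classic textbook RSA parameters; these toy sizes are for illustration only.

```python
# Textbook toy RSA: n = 61 * 53 = 3233, public exponent e, private exponent d.
n, e, d = 3233, 17, 2753

m = 65                       # plaintext, represented as an integer < n
c = pow(m, e, n)             # forward direction: easy with the public key (n, e)
assert pow(c, d, n) == m     # inverse direction: easy only with the trapdoor d
```

With real key sizes, recovering m from c without d would require factoring n, which is computationally infeasible.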
Secure key sharing can be achieved by taking advantage of this property in another
trapdoor one-way function, to perform something called a Diffie-Hellman key
exchange. Each of the two nodes performing such a key exchange inputs its own
private number as well as a common publicly shared number into the trapdoor
function, and publicly transfers its output result to the other node.
This particular trapdoor function has an additional special property: When each
node computes the function a second time, with the output result received from the
other node as input in place of the common publicly shared number, the result of this
second computation for both nodes is the same. This common result is the new secret
key.
The mechanism amounts to secure key sharing because the new secret key itself isn’t
actually shared over the network for an eavesdropper to see (Fig. 2). The two nodes’
private numbers would be needed to derive the new key. The trapdoor function
outputs that are shared over the network are of no use to an eavesdropper for
computing those private numbers, due to the one-way nature of the trapdoor
function (i.e., extreme difficulty to compute in the inverse direction).
2. Secure key sharing by using a trapdoor one-way function to perform a Diffie-
Hellman key exchange.
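The two computations performed by each node can be sketched with modular exponentiation as the trapdoor one-way function. The prime here is a toy 32-bit value; real deployments use standardized groups with primes of 2048 bits or more.

```python
import hashlib
import secrets

# Publicly shared parameters (toy sizes for illustration only).
p = 0xFFFFFFFB   # a prime modulus, known to everyone
g = 5            # a public base, known to everyone

a = secrets.randbelow(p - 2) + 1   # device's private number (never transmitted)
b = secrets.randbelow(p - 2) + 1   # peer's private number (never transmitted)

A = pow(g, a, p)   # first computation; A is sent openly to the peer
B = pow(g, b, p)   # first computation; B is sent openly to the device

# Second computation: each side feeds in the value received from the other.
device_secret = pow(B, a, p)   # (g^b)^a mod p
peer_secret = pow(A, b, p)     # (g^a)^b mod p
assert device_secret == peer_secret   # both sides now hold the same secret

# The shared secret is typically hashed down to a symmetric key, e.g. for AES.
new_key = hashlib.sha256(device_secret.to_bytes(16, "big")).digest()
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete-logarithm problem, which is infeasible at real key sizes.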
At any time and with any frequency of our choosing, we have a means for the
embedded device and any peer it needs to communicate with to agree on a new secret
key in a safe way. This new key can subsequently be used for symmetric encryption
and decryption until the key changes again. If any secret key is discovered, then the
amount of transferred data a hacker can hope to decrypt with it will depend on how
frequently this new key agreement mechanism is invoked.
Authentication
Although we now have a method for securely sharing secret keys, this only helps to
protect our embedded device from breaches in which a key used for communication
with a legitimate peer is somehow discovered. We also must address the possibility
that a hacker may impersonate the embedded device to the legitimate peer or vice
versa and seek to obtain its own key!
To do so, the impersonator would carry out a successful key agreement as described
in the preceding section, and then have fully encrypted conversations with the device
or peer, which wouldn't know that it's actually conversing with an attacker. Note
that if the hacker impersonates each to the other at the same time, it's known as a
MITM attack.
If the hacker knows how to participate in the Diffie-Hellman key exchange, then
there’s nothing in the key exchange by itself to prevent the hacker from succeeding.
For that, we need the help of another mechanism from asymmetric cryptography,
known as authentication.
A separate RSA key pair should be generated for each embedded device and its
legitimate peer. The device's private key is installed in the device and its public key in
the peer. For safety, this should be done over a trusted communication channel and
not the network that a hacker could access. Typically, it would happen at production
time, or when the device is first deployed in the field. The trusted communication
channel could be, for example, USB to a locally connected laptop computer.
Similarly, the peer's private key is installed in the peer and its public key in the
device, also over a trusted channel.
When the device sends its messages to the peer during the key agreement process, it
signs these messages using its private key. When the peer receives the messages, it
verifies the accompanying signature using the device's public key, thereby
authenticating that the messages were in fact sent by the device. Similarly, the peer
signs its key agreement messages using its private key, and the device authenticates
the peer using the peer's public key. More specifically, digital signing employs a
trapdoor one-way function like the RSA encryption function mentioned
previously (Fig. 3).
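Putting the two mechanisms together, each key exchange message is signed before it's sent and verified before it's trusted. This sketch reuses toy textbook RSA parameters and a toy Diffie-Hellman group; all values are illustrative, and real systems use far larger keys.

```python
import hashlib

# Device's toy RSA pair: the peer received (n, e) over a trusted channel at
# production time; only the device holds the private exponent d.
n, e, d = 3233, 17, 2753

p_dh, g = 0xFFFFFFFB, 5      # public Diffie-Hellman parameters (toy sizes)
a = 123456789                # device's private DH number
A = pow(g, a, p_dh)          # device's DH public value, to be sent to the peer

def sign(value: int) -> int:
    """Device signs its key exchange message with its private key."""
    h = int.from_bytes(hashlib.sha256(str(value).encode()).digest(), "big") % n
    return pow(h, d, n)

def verify(value: int, signature: int) -> bool:
    """Peer authenticates the message with the device's public key."""
    h = int.from_bytes(hashlib.sha256(str(value).encode()).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(A)
assert verify(A, sig)          # the genuine DH value is accepted
assert not verify(A + 1, sig)  # an attacker-substituted value is rejected
```

The peer signs its own messages the same way with its private key, and the device verifies them with the peer's public key, closing off impersonation in both directions.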
The hacker doesn’t know the private key of either the embedded device or its
legitimate peer. Furthermore, the hacker can’t derive either private key from
signatures captured eavesdropping on the key exchange messages sent between the
two. That’s because, once again, the digital signing trapdoor function is extremely
difficult to compute in the inverse direction. Without either private key, the hacker is
unable to sign its own key exchange messages in a manner that the device or the peer
will find acceptable, and therefore can’t impersonate either one to carry out a
malicious key exchange successfully.