
# Challenge 31: Implement and break HMAC-SHA1 with an artificial timing leak

https://cryptopals.com/sets/4/challenges/31

The pseudocode on Wikipedia should be enough. HMAC is very easy.

Using the web framework of your choosing (Sinatra, web.py, whatever), write a tiny application that has a URL that takes a "file" argument and a "signature" argument, like so:

http://localhost:9000/test?file=foo&signature=46b4ec586117154dacd49d664e5d63fdc88efb51



Have the server generate an HMAC key, and then verify that the "signature" on incoming requests is valid for "file", using the "==" operator to compare the valid MAC for a file with the "signature" parameter (in other words, verify the HMAC the way any normal programmer would verify it).

Write a function, call it "insecure_compare", that implements the == operation by doing byte-at-a-time comparisons with early exit (ie, return false at the first non-matching byte).

In the loop for "insecure_compare", add a 50ms sleep (sleep 50ms after each byte).

Use your "insecure_compare" function to verify the HMACs on incoming requests, and test that the whole contraption works. Return a 500 if the MAC is invalid, and a 200 if it's OK.

Using the timing leak in this application, write a program that discovers the valid MAC for any file.

### Why artificial delays?

Early-exit string compares are probably the most common source of cryptographic timing leaks, but they aren't especially easy to exploit. In fact, many timing leaks (for instance, any in C, C++, Ruby, or Python) probably aren't exploitable over a wide-area network at all. To play with attacking real-world timing leaks, you have to start writing low-level timing code. We're keeping things cryptographic in these challenges.

HMAC can be thought of as “the proper way of implementing a MAC using a hash function”. Its main interest is that it protects against the length-extension attacks we saw in the previous challenge. And, as the instructions say, it's really easy to implement.

In [1]:
# https://docs.python.org/3/library/hashlib.html
import hashlib

from libmatasano import bxor

def sha1(data):
    '''single-call SHA-1
    (instead of having several calls then a finalization call)'''
    h = hashlib.sha1()
    h.update(data)
    return h.digest()

def hmac_sha1(data, key):
    # see https://datatracker.ietf.org/doc/html/rfc2104
    # SHA-1 has a 64-byte block size;
    # hash keys longer than a block, then pad with zero bytes
    if len(key) > 64:
        key = sha1(key)
    key = key + b'\x00'*(64 - len(key))

    key_ipad = bxor(key, b'\x36'*64)
    key_opad = bxor(key, b'\x5c'*64)

    return sha1(
        key_opad
        + sha1(
            key_ipad
            + data
        )
    )

# Testing our implementation against a value I got using the "cryptography" library
assert (
    hmac_sha1(b'test message', b'test key')
    == b'\xbb?\x1a\xdc\x11~\xa0\xed\x15\x9d\x8ek\xaa\xfb\x9d\xff\xe4\x8caZ'
)


Now let's implement our “Web application”. I am not going to use a Web framework; that would really be overkill. It's not very realistic to pass a file only through a URL anyway, the file should be in the body of the request. Instead, we'll just have an object holding a MAC key and simulating the processing of a request.

In [2]:
import os
from time import sleep
from binascii import hexlify, unhexlify
# https://docs.python.org/3/library/urllib.parse.html
from urllib.parse import urlparse, parse_qs

class Website:
    def __init__(self):
        self.mac_key = os.urandom(16)

    def handle_query(self, url):
        parsed_query = parse_qs(urlparse(url).query)

        # note the "[0]": function parse_qs maps keys to *lists* of values
        # because the HTTP protocol allows the same key to appear several times
        # in a query string
        file = parsed_query['file'][0]
        signature = parsed_query['signature'][0]

        sig_bytes = unhexlify(signature)

        verify_signature(sig_bytes, file.encode(), self.mac_key)


We now have to write this verify_signature function, which is the core of the challenge.

In [3]:
class InvalidSignatureError(Exception):
    pass

def verify_signature(signature, data, key):
    expected_signature = hmac_sha1(data, key)

    # note: zip stops at the end of the shorter argument,
    # so only the common prefix of the two signatures is compared
    for (sig_byte, expected_byte) in zip(signature, expected_signature):
        if sig_byte != expected_byte:
            # this is the "early exit":
            # technically we can reject the signature
            # *as soon as* we find one byte that differs.
            # This however is what will cause the vulnerability
            # that is exploited in this challenge.
            raise InvalidSignatureError

        # the "artificial delay" of 50 milliseconds we are asked to add
        sleep(0.05)

    # We don't return anything:
    # if we didn't raise an exception it means the signature was valid

In [4]:
# instantiation
website = Website()

# Some quick tests

correct_signature = bytes.decode(hexlify(
    hmac_sha1(str.encode('foo'), website.mac_key)
))
# should not raise an error
website.handle_query(f"http://localhost:9000/test?file=foo&signature={correct_signature}")
# should raise an error
wrong_signature = correct_signature[:2] + 'fff' + correct_signature[5:]
try:
    website.handle_query(f"http://localhost:9000/test?file=foo&signature={wrong_signature}")
    # unexpected
    raise Exception('Expected an "InvalidSignatureError"')
except InvalidSignatureError:
    # expected
    pass


Good, now let's break things.

This challenge is about a timing attack: by observing the time it takes to process a query, the attacker can learn information she is not supposed to get. Here, because verify_signature exits as soon as it finds a wrong byte, the time it takes to verify a signature depends on the position of the first wrong byte. This tells the attacker how many bytes of the signature she sent are valid, and from this she can recover the correct signature byte by byte.

It's probably better to just see it in action for the first byte: we are going to send 256 signatures for validation, one for each possible first byte. All of them will be rejected instantly because their first byte is wrong, except the one whose first byte is valid: for that one, verify_signature will check the next byte instead of exiting immediately. Because we added a sleep instruction, the difference in time between checking one byte and checking two bytes is quite significant (50 ms), so it will be easy to notice.

The first thing we need is a function that measures how long it takes to process a request:

In [5]:
import time

def measure_verification_time(signature, website):
    start_time = time.perf_counter_ns()
    try:
        website.handle_query(f"http://localhost:9000/test?file=foo&signature={signature}")
        raise Exception('signature was not rejected')
    except InvalidSignatureError:
        pass
    end_time = time.perf_counter_ns()

    duration = end_time - start_time
    # we return the duration in milliseconds
    # (time.perf_counter_ns() returns nanoseconds)
    return duration//1_000_000

In [6]:
import time

timings_first_byte = [
    # see https://docs.python.org/3/library/string.html#formatspec
    (
        measure_verification_time(f'{first_byte:02x}' + '0'*(15*2), website),
        f'{first_byte:02x}',
    )
    for first_byte in range(256)
]


Let's see for which value validation took the most time:

In [7]:
sorted(timings_first_byte, reverse=True)[:5]

Out[7]:
[(50, '83'), (0, 'ff'), (0, 'fe'), (0, 'fd'), (0, 'fc')]

Each value took less than 1 ms to validate, except one value where it took 50 ms. What do you think the first byte of the correct signature is?

In [8]:
correct_signature[:2]

Out[8]:
'83'

Bingo.

So we found the first byte. Now we send 256 more signatures, all of them having 83 as their first byte and each having a different second byte. Again, the one that takes the most time to validate is the correct guess for the second byte. By repeating this we can recover the correct signature for our file byte by byte, even though we don't have the key.

In [9]:
def recover_next_signature_byte(website, already_recovered):
    timings = list()
    for candidate_byte in range(256):
        candidate_signature = (
            already_recovered
            + f'{candidate_byte:02x}'
            + '0'*(16*2 - len(already_recovered) - 2)
        )

        duration = measure_verification_time(candidate_signature, website)

        if timings:
            # mean timing for the *other* bytes tried so far
            mean = sum(x[0] for x in timings) / len(timings)

            if duration > mean + 30:
                # this one took much longer
                # (at least 30 milliseconds more than average),
                # that's probably the correct byte,
                # so no need to go further
                recovered_byte = candidate_byte
                break

        timings.append((duration, candidate_byte))
    else:
        # in Python, the "else" block of a "for" loop
        # is executed if the loop was not exited with a "break"
        longest = sorted(timings, reverse=True)[0]
        recovered_byte = longest[1]

    return f'{recovered_byte:02x}'

In [10]:
print('EXPECTED SIGNATURE:', correct_signature)

recovered_signature = str()
for _ in range(16):
    next_byte = recover_next_signature_byte(website, recovered_signature)
    recovered_signature += next_byte
    print(next_byte)

EXPECTED SIGNATURE: 8372506d3a4b11d8c8b02ea9695457a110138849
83
72
50
6d
3a
4b
11

---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-10-...> in <module>
      3 recovered_signature = str()
      4 for _ in range(16):
----> 5     next_byte = recover_next_signature_byte(website, recovered_signature)
      6     recovered_signature += next_byte
      7     print(next_byte)

<ipython-input-9-...> in recover_next_signature_byte(website, already_recovered)
      8         )
      9
---> 10         duration = measure_verification_time(candidate_signature, website)
     11
     12         if timings:
<ipython-input-5-6738c2b3f610> in measure_verification_time(signature, website)
2     start_time = time.perf_counter_ns()
3     try:
----> 4         website.handle_query(f"http://localhost:9000/test?file=foo&signature={signature}")
5         raise Exception('signature was not rejected')
6     except InvalidSignatureError:

<ipython-input-2-592e1f0cc696> in handle_query(self, url)
20         sig_bytes = unhexlify(signature)
21
---> 22         verify_signature(sig_bytes, file.encode(), self.mac_key)

<ipython-input-3-930895a813c8> in verify_signature(signature, data, key)
15
16         # the "artificial delay" of 50 milliseconds we are asked to add
---> 17         sleep(0.05)
18
19     # We don't return anything:

KeyboardInterrupt: 

I interrupted it because it takes forever, but we can see that it works.

Why does it take so long? To recover one byte we have to try a large number of signatures (up to 256), and because of the “artificial delay” we added, verifying each signature takes quite some time (50 ms per byte up to the first wrong byte).

As a result, say that we have already recovered 5 bytes and that we are trying to recover the 6th one: we have to send up to 256 signatures, each taking 5×50 ms to verify (except the correct one, which takes 6×50 ms, but we can neglect this), for a total of about 64 seconds, more than a minute. We could make this lower by reducing the “artificial delay”, but then we would increase the chances of errors in the recovery.
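To make that estimate concrete, here is the arithmetic as a small sketch, counting only the artificial 50 ms sleeps (seconds_to_recover_byte is a hypothetical helper, not part of the attack code above; actual computation and network time are ignored):

```python
# Back-of-the-envelope cost of recovering byte k (0-indexed) of the
# signature, counting only the artificial 50 ms sleeps.
delay = 0.05      # seconds slept per matching byte
candidates = 256  # possible values for one byte

def seconds_to_recover_byte(k):
    # a wrong guess matches the k already-recovered bytes, so it
    # triggers k sleeps before the early exit;
    # the single correct guess triggers one extra sleep
    return (candidates - 1) * k * delay + (k + 1) * delay

# recovering the 6th byte (k=5) takes about a minute
print(round(seconds_to_recover_byte(5)))  # prints 64
```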

Actually, I added a small optimization to my function recover_next_signature_byte so that we don't always have to try all 256 signatures: as soon as we notice one signature whose verification takes significantly longer than average, we select its byte as the correct one. This saves a lot of time when the correct byte has a low value, but if it's 0xff we still have to try every single value before we find it.

## Takeaways

I want to stress a few important things. This challenge does not rely on the fact that SHA-1 is broken: we could use any unbroken cryptographic hash function, for instance SHA-2, and it would still work. Nor does the challenge rely on the fact that SHA-1 and SHA-2 are vulnerable to length-extension attacks: we could use hash functions that are immune to length extension, like SHA-3 or BLAKE3, and the challenge would still work. HMAC prevents length-extension attacks anyway.

So what does this challenge rely on? On the fact that using a “normal” comparison to check a signature is not safe. Usually, when you want to test whether two values are equal, you do it in the quickest possible way, and it makes sense to exit at the first byte that differs. But when verifying signatures, doing so is not safe, and you should use a constant-time comparison instead. Python provides hmac.compare_digest, though you should probably use the cryptography third-party library instead. In Go you have the crypto/subtle package, and so on.
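As a sketch of the fix, here is the verification rewritten around the standard library's hmac module (verify_signature_safely is a hypothetical replacement for the vulnerable function above; hmac.new and hmac.compare_digest are standard-library functions):

```python
import hashlib
import hmac

class InvalidSignatureError(Exception):
    pass

def verify_signature_safely(signature, data, key):
    expected = hmac.new(key, data, hashlib.sha1).digest()
    # compare_digest's running time depends only on the lengths of its
    # arguments, not on the position of the first differing byte,
    # so there is no early exit for a timing attack to exploit
    if not hmac.compare_digest(signature, expected):
        raise InvalidSignatureError
```

A side benefit: compare_digest returns False whenever the lengths differ, so it does not silently ignore trailing bytes the way the zip-based loop above does.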