%
\documentclass[11pt]{article}
\usepackage{amsfonts}
\usepackage{latexsym}
\setlength{\oddsidemargin}{.25in}
\setlength{\evensidemargin}{.25in}
\setlength{\textwidth}{6in}
\setlength{\topmargin}{-0.4in}
\setlength{\textheight}{8.5in}
\input{preamble}
%end of macros
\begin{document}
\lecture{1}{One-Time MACs, (XOR)Universal hashing, Weak Keys}{January 10, 2013}{Eric Miles}
In today's lecture we study one-time message authentication codes (MACs) which are secure in an information-theoretic sense. We will see that, compared to information-theoretically secure encryption, significantly better parameters can be achieved. We will also study such MACs in the setting of imperfect randomness, i.e.\ when the secret key is not drawn from the uniform distribution but rather is only guaranteed to have some min-entropy.
\medskip
\section{Class Organization}
Before starting the technical material, we make two remarks on the class itself.
First, all registered students will be expected to scribe roughly two lectures, and all visitors are encouraged to scribe one lecture.
Second, throughout the lectures there will be a number of problems to be solved outside of class time, with varying levels of difficulty. {\em Exercises} are simple problems which will usually just involve a routine calculation. {\em Questions} will require slightly more work, but will still be on the easier end of the spectrum. At the other end of the spectrum we have {\em projects}, which will be more open-ended and for which the solution may not be known. Finally {\em quesjects} will be somewhere in between the latter two, still requiring work but with a somewhat clearer path to a solution.
\section{One-time MACs}
We start by defining a (one-time) {\em message authentication code} (MAC). The setting is the following: we have two parties, $A$(lice) and $B$(ob), who share a secret key $r \in \zo^m$. $A$ wants to send a message $x \in \zo^n$ to $B$ along with a tag $t \in \zo^\lambda$ that allows $B$ to verify that the message came from $A$. To do so, they use a function $\Tag : \zo^m \times \zo^n \to \zo^\lambda$; specifically, $A$ sends $x$ and $t := \Tag(r,x)$, and $B$ receives $(x',t')$ and verifies that $t' = \Tag(r,x')$. Throughout, we will use $\Tag_r$ to denote the function $\Tag(r,\cdot)$.
In general one can (and does) consider randomized MACs, but today we will only consider deterministic MACs. As a result, the correctness property, namely that $B$ will always accept a message with a valid tag, is immediate.
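To make the communication pattern concrete, here is a minimal Python sketch of the tagging and verification steps. The function \texttt{toy\_tag} is a hypothetical placeholder standing in for a real $\Tag$ (it is {\em not} secure); the secure instantiations are constructed later in these notes.

```python
import secrets

def toy_tag(r, x, lam=16):
    """Placeholder for Tag_r(x): deterministic, lam-bit output. NOT secure."""
    return (r * 1000003 + x) % (1 << lam)

def send(r, x):
    """Alice sends the message together with t = Tag_r(x)."""
    return (x, toy_tag(r, x))

def receive(r, x_prime, t_prime):
    """Bob accepts iff the received tag matches Tag_r(x')."""
    return toy_tag(r, x_prime) == t_prime

r = secrets.randbits(32)     # shared secret key
x, t = send(r, 2013)
assert receive(r, x, t)      # correctness: a valid (message, tag) pair is always accepted
```

Since $\Tag$ is deterministic, correctness (the final assertion) is immediate, exactly as remarked above.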
To define the security of a MAC, consider the following game $G_r$ parameterized by $r \in \zo^m$. There are two players: a challenger $C$ who receives $r$ as input, and an adversary $E$(ve) who receives no input. $G_r$ has the following three steps.
\begin{enumerate}
\item $E$ chooses $x \in \zo^n$ and sends $x$ to $C$.
\item $C$ computes and sends $t := \Tag_r(x)$ to $E$.
\item $E$ outputs $(x',t') \in \zo^n \times \zo^\lambda$.
\end{enumerate}
We say that $E$ {\em wins} $G_r$ if $x' \neq x$ and $\Tag_r(x') = t'$, and write $\Adv_E(r) := \Pr[E$ wins $ G_r]$ to denote $E$'s {\em advantage}. In general we write $\Adv^{G_r}_E(r)$ if we need to specify the game.
Our goal in this lecture is to obtain an efficient function $\Tag$ such that, for every computationally unbounded adversary $E$, $\Adv_E(r)$ is negligible in $\lambda$. Clearly if $r$ is fixed, this is impossible as we can consider an $E$ that has $r$ hardwired. Thus, the following security definition considers secret keys $r$ that are chosen probabilistically.
Here and throughout the lecture notes, $U_m$ denotes the uniform distribution on $\zo^m$. We will use capital letters to denote random variables and/or the distributions from which they are sampled, and lower-case letters to denote specific values.
\begin{definition}
Let $R$ be a distribution on $\zo^m$ and $\delta > 0$. A function $\Tag : \zo^m \times \zo^n \to \zo^\lambda$ is an {\em $(R,\delta)$-secure one-time MAC} if for every $E$,
$$\mathbb{E}_{r \leftarrow R} [\Adv_E(r)] \leq \delta.$$
When $R \equiv U_m$, we simply say {\em $\delta$-secure}.
\end{definition}
This definition captures what we intuitively want from a (one-time) MAC, because any eavesdropper $E$ who overhears one message from $A$ to $B$ does not gain enough information to then forge any message to $B$ from $A$.
\medskip
In constructing MACs, there are two general goals. The first is to minimize the tag length $\lambda$ and the error $\delta$ for given key and message lengths $m,n$. The second, more common goal is to minimize the tag and key lengths $\lambda,m$ for a given message length $n$ and error $\delta$.
We will construct MACs from a certain type of hash functions, defined next.
\begin{definition}
Let $n,\lambda,p \in \mathbb{N}$ and $\delta > 0$. A family of functions $H = \{h_a : \zo^n \to \zo^\lambda\ |\ a \in \zo^p \}$ is {\em $\delta$-almost XOR-universal ($\delta$-AXU)} if $\forall x \neq x' \in \zo^n, y \in \zo^\lambda$:
\begin{equation}
\Pr_{A \leftarrow U_p} [h_A(x) \oplus h_A(x') = y] \leq \delta. \label{eqn-hash}
\end{equation}
If $\delta = 2^{-\lambda}$ (which is optimal when $\lambda < n$), we say $H$ is {\em XOR-universal (XU)}.
If (\ref{eqn-hash}) holds only for $y = 0^\lambda$, namely
\begin{equation}
\Pr_{A \leftarrow U_p} [h_A(x) = h_A(x')] \leq \delta, \label{eqn-hash1}
\end{equation}
we say $H$ is {\em $\delta$-almost universal ($\delta$-AU)} (or {\em universal} when $\delta=2^{-\lambda}$).
\end{definition}
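For intuition, the definition can be checked exhaustively on tiny parameters. The following Python sketch (our own illustration, not part of the notes) brute-forces the worst-case probability in (\ref{eqn-hash}) and confirms that multiplication in $GF(4)$, i.e.\ $h_a(x) = a \cdot x$, gives an XU family.

```python
def gf_mul(a, b, lam, poly):
    """Carry-less product of a and b, reduced modulo the irreducible
    polynomial `poly`: multiplication in GF(2^lam)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> lam) & 1:
            a ^= poly
    return r

def worst_axu_prob(h, keys, n, lam):
    """Max over x != x' and y of Pr_a[h_a(x) XOR h_a(x') = y], by brute force."""
    worst = 0.0
    for x in range(1 << n):
        for xp in range(1 << n):
            if x == xp:
                continue
            for y in range(1 << lam):
                cnt = sum(1 for a in keys if h(a, x) ^ h(a, xp) == y)
                worst = max(worst, cnt / len(keys))
    return worst

# h_a(x) = a*x over GF(4) (via s^2 + s + 1): for z = x - x' != 0, the map
# a -> a*z is a bijection, so the worst probability is exactly 2^-lam.
h = lambda a, x: gf_mul(a, x, 2, 0b111)
assert worst_axu_prob(h, range(4), 2, 2) == 0.25
```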
\subsection{Constructing MACs from $\delta$-AXU functions}
Before constructing $\delta$-AXU functions, we show how to construct MACs from them.
\begin{theorem}
\label{thm-hash-to-mac}
Let $H = \{h_a : \zo^n \to \zo^\lambda\, |\, a \in \zo^p\}$ be a $\delta$-AXU function family. Then, parsing $r = (a,b) \in \zo^p \times \zo^\lambda$, the function $$\Tag_r(x) := h_a(x) \oplus b$$ is a $\delta$-secure one-time MAC with key length $m = p+\lambda$.
\end{theorem}
\begin{exercise}
Find a counterexample to Theorem \ref{thm-hash-to-mac} if instead $\Tag_r(x) := h_a(x)$ (i.e.\ if $\oplus\, b$ is omitted).
\end{exercise}
Recall the game $G_r$ that defines the security of a given MAC $\Tag_r$. We now prove Theorem \ref{thm-hash-to-mac} by defining another game $G'_r$ with the following properties: first, any adversary with advantage $\eps$ in $G_r$ implies the existence of an adversary with advantage $\eps$ in $G'_r$; second, every adversary has advantage bounded by $\delta$ in $G'_r$ when $H$ is $\delta$-AXU. (In fact, as a syntactic convenience we will define two games $G'_r,G''_r$ with these properties.)
\begin{proof}
We first restate the game $G_R$ when $R \leftarrow U_m$. Throughout the proof, we assume wlog that the adversary $E$ is deterministic and computationally unbounded, and also that $E$ never outputs $X'=X$ (as then she loses the game for sure).
\medskip
$G_R :=$
\begin{enumerate}
{\setlength\itemindent{1cm}
\item $E$ chooses and sends $X \in \zo^n$ to $C$.
\item $C$ samples $(A,B) \leftarrow U_p \times U_\lambda$ and sends $T = \Tag_{(A,B)}(X) = h_A(X) \oplus B$ to $E$.
\item $E$ computes and outputs $(X',T')$, and wins if $X \neq X'$ and $\Tag_{(A,B)}(X') = T'$.
}
\end{enumerate}
We define the game $G'_R$ to be the same as $G_R$ with the following change to step 3: $E$ computes $X',T'$ but instead outputs $(X', Y = T' \oplus T)$, and wins if $h_A(X) \oplus h_A(X') = Y$.
Notice, this is only a syntactic change, since
$$h_A(X') \oplus B = T' \mbox{~iff~} (h_A(X') \oplus B) \oplus (h_A(X) \oplus B) = T' \oplus T \mbox{~iff~} h_A(X) \oplus h_A(X') = Y.$$
Hence, clearly we have $$\max_E \left( \Adv^{G_R}_E(R) \right) = \max_E \left( \Adv^{G'_R}_E(R) \right)$$ when $R \leftarrow U_m$.
We now define a third game $G''_R$, which is only different from $G'_R$ in the way the tag $T$ is computed.
\medskip
$G''_R :=$
\begin{enumerate}
{\setlength\itemindent{1cm}
\item $E$ chooses and sends $X \in \zo^n$ to $C$.
\item $C$ samples $T \leftarrow U_\lambda$ and sends it to $E$.
\item $E$ computes and outputs $(X',Y)$.
\item $C$ samples $A \leftarrow U_p$, and $E$ wins if $h_A(X) \oplus h_A(X') = Y$.
}
\end{enumerate}
Notice, because $B$ is sampled uniformly in game $G'_R$, we have that $T$ is distributed uniformly in both $G'_R$ and $G''_R$. Moreover, $T$ is independent of $A$, which justifies the ``delayed'' sampling of $A$ in step 4 of game $G''_R$. Thus the changes from $G'_R$ to $G''_R$ preserve the distribution of each random variable, and we have
$$\max_E \left( \Adv^{G'_R}_E(R) \right) = \max_E \left( \Adv^{G''_R}_E(R) \right).$$
Furthermore, because $A$ is sampled at random {\em after} $X\neq X'$ and $Y$ are defined,
the fact that $H$ is $\delta$-AXU implies that
$$\max_E \left( \Adv^{G''_R}_E(R) \right) \leq \delta$$
which implies the theorem.
\end{proof}
Note the importance of sampling $B$ uniformly at random in the proof of this theorem.
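As an illustration of Theorem \ref{thm-hash-to-mac}, the following Python sketch (parameter choices ours) instantiates $\Tag_{(a,b)}(x) = h_a(x) \oplus b$ with the XU family $h_a(x) = a \cdot x$ over $GF(2^8)$. The final assertion checks exactly the point just made: over the choice of $b$, the observed tag is uniform, so a single tag reveals nothing about which hash $h_a$ was used.

```python
import secrets

LAM, POLY = 8, 0x11B    # GF(2^8) via x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Multiplication in GF(2^LAM)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> LAM) & 1:
            a ^= POLY
    return r

def keygen():
    """Key r = (a, b): a selects the hash, b one-time-pads the tag."""
    return secrets.randbelow(1 << LAM), secrets.randbelow(1 << LAM)

def tag(key, x):
    a, b = key
    return gf_mul(a, x) ^ b     # Tag_{(a,b)}(x) = h_a(x) XOR b

# For any fixed a and message x, the tag ranges over all of {0,1}^LAM
# as b varies, i.e. the tag is a uniform random value to the adversary.
assert {tag((57, b), 123) for b in range(256)} == set(range(256))
```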
\subsection{Constructing $\delta$-AXU functions}
We now turn to constructing $\delta$-AXU function families $H = \{h_a : \zo^n \to \zo^\lambda\, |\, a \in \zo^p\}$. But first, we note the following lower bounds on the key size $p$.
\begin{center}
\begin{tabular}{|c | l|}
\hline
if $H$ is... & then... \\ \hline \hline
XU & $p \geq n$ \\ \hline
$\delta$-AXU & $p \geq \logdel + \log(n/\lambda)$ \\ \hline
universal & $p \geq n - \lambda$ \\ \hline
$\delta$-AU & $p \geq \logdel + \log((n-\lambda)/\lambda)$ \\ \hline
\end{tabular}
\end{center}
The first construction we consider is trivially XU, but has very poor key length $p = n \lambda$.
\begin{construction}
Let the key $a \in \zo^{\lambda \times n}$ be a matrix, and define $h_a(x) := a \cdot x$.
\end{construction}
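A Python sketch of this construction (our illustration): the key is stored as $\lambda$ row integers, and $h_a(x) = a \cdot x$ is the matrix-vector product over $GF(2)$. The assertion checks the $GF(2)$-linearity $h_a(x) \oplus h_a(x') = h_a(x \oplus x')$, which is what makes the family trivially XU.

```python
import secrets

def h_matrix(rows, x):
    """h_a(x) = a*x over GF(2). rows[i] is row i of the key matrix as an
    n-bit integer; x is an n-bit integer; the output has len(rows) bits."""
    t = 0
    for i, row in enumerate(rows):
        bit = bin(row & x).count("1") & 1   # inner product <row, x> mod 2
        t |= bit << i
    return t

n, lam = 16, 4
rows = [secrets.randbits(n) for _ in range(lam)]    # key: n*lam bits
x, xp = secrets.randbits(n), secrets.randbits(n)
# linearity: parity(row & x) XOR parity(row & x') = parity(row & (x XOR x'))
assert h_matrix(rows, x) ^ h_matrix(rows, xp) == h_matrix(rows, x ^ xp)
```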
We now observe that by instead letting $a$ come from the set of so-called Hankel matrices, we can save on the key length.
\begin{definition}
A matrix $a \in \zo^{\lambda \times n}$ is a {\em Hankel matrix} if each reverse diagonal is constant. That is, for each $2 \leq i \leq \lambda$ and each $1 \leq j \leq n-1$, $a_{i,j} = a_{i-1,j+1}$.
\end{definition}
\begin{construction}
\label{const-hankel}
Let the key $a \in \zo^{\lambda \times n}$ be a Hankel matrix, and define $h_a(x) := a \cdot x$.
\end{construction}
\begin{question}
Prove that Construction \ref{const-hankel} is XU.
\end{question}
Note that a Hankel matrix is specified by giving a single bit for each of the $n+\lambda-1$ reverse diagonals. Thus we have $p = n + \lambda - 1$, which is $< 2n$ when $\lambda \leq n$ and thus within a constant factor of the XU lower bound.
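The following Python sketch (our illustration) expands the $n+\lambda-1$ key bits, one per reverse diagonal, into the rows of the Hankel matrix, and checks the defining property $a_{i,j} = a_{i-1,j+1}$ (here with $0$-indexed rows and columns, so entry $(i,j)$ is \texttt{bits[i+j]}).

```python
import secrets

def hankel_rows(bits, n, lam):
    """Rows of the lam x n Hankel matrix whose reverse diagonals are given
    by bits[0..n+lam-2]; 0-indexed entry (i, j) equals bits[i + j]."""
    rows = []
    for i in range(lam):
        row = 0
        for j in range(n):
            if bits[i + j]:
                row |= 1 << j
        rows.append(row)
    return rows

n, lam = 8, 3
bits = [secrets.randbits(1) for _ in range(n + lam - 1)]   # key: n+lam-1 bits
rows = hankel_rows(bits, n, lam)
# Hankel property: each reverse diagonal is constant.
for i in range(1, lam):
    for j in range(n - 1):
        assert ((rows[i] >> j) & 1) == ((rows[i - 1] >> (j + 1)) & 1)
```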
The next construction uses finite fields and achieves $p = n$, matching the XU lower bound. We assume some implicit bijection between $\zo^n$ and the finite field $GF(2^n)$ defined by an irreducible $GF(2)$-polynomial of degree $n$.
\begin{construction}
\label{const-fields}
Let the key $a \in GF(2^n)$, and define $h_a(x)$ to be the lower-order $\lambda$ bits of $a \cdot x$ (where multiplication is in $GF(2^n)$).
\end{construction}
\begin{question}
Prove that Construction \ref{const-fields} is XU.
\end{question}
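While we leave the proof as a question, the construction is easy to test exhaustively on toy parameters. The following Python sketch (our illustration, with $n=4$, $\lambda=2$, and the irreducible polynomial $x^4+x+1$) verifies the XU property by brute force: for every $x \neq x'$ and every $y$, exactly a $2^{-\lambda}$ fraction of keys satisfies $h_a(x) \oplus h_a(x') = y$.

```python
def gf_mul(a, b, n, poly):
    """Multiplication in GF(2^n) defined by the irreducible poly."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= poly
    return r

def h_trunc(a, x, n, lam, poly):
    """h_a(x): the low-order lam bits of a*x in GF(2^n)."""
    return gf_mul(a, x, n, poly) & ((1 << lam) - 1)

N, LAM, POLY = 4, 2, 0b10011    # GF(16) via x^4 + x + 1
for x in range(1 << N):
    for xp in range(x + 1, 1 << N):
        for y in range(1 << LAM):
            cnt = sum(1 for a in range(1 << N)
                      if h_trunc(a, x, N, LAM, POLY)
                      ^ h_trunc(a, xp, N, LAM, POLY) == y)
            assert cnt == (1 << N) >> LAM   # exactly 2^{n-lam} keys per y
```

The check works because truncation commutes with $\oplus$: the XOR of the two hashes is the low $\lambda$ bits of $a \cdot (x - x')$, which is uniform over $GF(2^n)$ as $a$ varies.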
Our final XU construction achieves the same key length, but uses inner products over the finite field $GF(2^\lambda)$ and will be more convenient to modify later.
\begin{construction}
\label{constr-ip}
Assume that $n = b\lambda$ for some $b \in \mathbb{N}$. Let the key $a = (a_1,\ldots,a_b) \in GF(2^\lambda)^b$. Then parse $x$ as $(x_1,\ldots,x_b) \in GF(2^\lambda)^b$, and define $h_a(x) := \langle a,x \rangle = \sum_i a_i x_i$.
\end{construction}
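A Python sketch of this construction (our illustration), together with an exhaustive check of the XU property for the toy parameters $\lambda = 2$ and $b = 2$ blocks; field addition in $GF(2^\lambda)$ is bit-wise XOR.

```python
from itertools import product

def gf_mul(a, b, lam, poly):
    """Multiplication in GF(2^lam) defined by the irreducible poly."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> lam) & 1:
            a ^= poly
    return r

def h_inner(a_blocks, x_blocks, lam, poly):
    """h_a(x) = <a, x> over GF(2^lam): XOR (= field sum) of blockwise products."""
    t = 0
    for ai, xi in zip(a_blocks, x_blocks):
        t ^= gf_mul(ai, xi, lam, poly)
    return t

LAM, POLY, B = 2, 0b111, 2              # GF(4), messages of b = 2 blocks
vecs = list(product(range(4), repeat=B))
for x in vecs:
    for xp in vecs:
        if x == xp:
            continue
        for y in range(4):
            cnt = sum(1 for a in vecs
                      if h_inner(a, x, LAM, POLY)
                      ^ h_inner(a, xp, LAM, POLY) == y)
            assert cnt == len(vecs) // 4    # probability exactly 2^-lam
```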
\begin{lemma}
Construction \ref{constr-ip} is XU.
\end{lemma}
\begin{proof}
Fix $x \neq x' \in GF(2^\lambda)^b$ and $y \in GF(2^\lambda)$. Define $z = x - x' \neq 0^b$. Then we have
$$\Pr_a[h_a(x) \oplus h_a(x') = y] = \Pr_a[\langle a,x \rangle \oplus \langle a,x' \rangle = y] = \Pr_a[\langle a,z \rangle = y]$$
because addition and subtraction in $GF(2^\lambda)$ both correspond to bit-wise $\oplus$. We claim that the latter probability equals $2^{-\lambda}$, which implies the lemma. To see this, assume wlog that $z_1 \neq 0$, and note that for any setting of $a_2,\ldots,a_b$ we have
$$\Pr_{a_1}[\langle a,z \rangle = y] = \Pr_{a_1}[a_1 = c] = 2^{-\lambda}$$
where $c := (y - \sum_{i \geq 2} a_i z_i) \cdot z_1^{-1} \in GF(2^\lambda)$.
\end{proof}
To achieve only universality as opposed to XOR-universality, we can save $\lambda$ bits in the key (and thus match the lower bound) by fixing $a_1 = 1 \in GF(2^\lambda)$.
\medskip
We now modify Construction \ref{constr-ip} to obtain a $\delta$-AXU family for $\delta = 2^{-\lambda} \cdot n/\lambda$ while reducing the key length to $\lambda$. This is done by replacing $(a_1,\ldots,a_b)$ with $(a,a^2,\ldots,a^b)$ for a single $a \in GF(2^\lambda)$.
\begin{construction}
\label{constr-axu}
Assume that $n = b\lambda$ for some $b \in \mathbb{N}$. Let the key $a \in GF(2^\lambda)$. Then parse $x$ as $(x_1,\ldots,x_b) \in GF(2^\lambda)^b$, and define $h_a(x) := \sum_i a^i \cdot x_i$.
\end{construction}
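A Python sketch of this construction (our illustration), evaluating $\sum_i a^i \cdot x_i$ via Horner's rule, together with an exhaustive check of the $\delta$-AXU bound for the toy parameters $\lambda = 2$, $b = 2$: for any fixed $x \neq x'$ and $y$, at most $b$ of the $2^\lambda$ keys can satisfy $h_a(x) \oplus h_a(x') = y$, since those keys are roots of a nonzero polynomial of degree $\leq b$.

```python
from itertools import product

def gf_mul(a, b, lam, poly):
    """Multiplication in GF(2^lam) defined by the irreducible poly."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> lam) & 1:
            a ^= poly
    return r

def h_poly(a, x_blocks, lam, poly):
    """h_a(x) = sum_i a^i * x_i over GF(2^lam), computed by Horner's rule:
    a*(x_1 + a*(x_2 + ... + a*x_b))."""
    acc = 0
    for xi in reversed(x_blocks):
        acc = gf_mul(a, acc ^ xi, lam, poly)
    return acc

LAM, POLY, B = 2, 0b111, 2              # GF(4), messages of b = 2 blocks
vecs = list(product(range(4), repeat=B))
for x in vecs:
    for xp in vecs:
        if x == xp:
            continue
        for y in range(4):
            cnt = sum(1 for a in range(4)
                      if h_poly(a, x, LAM, POLY)
                      ^ h_poly(a, xp, LAM, POLY) == y)
            assert cnt <= B             # at most b roots: delta = b * 2^-lam
```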
\begin{lemma}
Construction \ref{constr-axu} is $(2^{-\lambda} \cdot n/\lambda)$-AXU.
\end{lemma}
\begin{proof}
Fix $x \neq x'$ and $y$ as before, and let $z = x - x' \neq 0^b$. Then, if we define $z_0 := y$, we have
$$\Pr_a[h_a(x) \oplus h_a(x') = y] = \Pr_a\left[\sum_{i=0}^b a^i \cdot z_i = 0\right].$$
Thus $h_a(x)\, \oplus\, h_a(x') = y$ only for those $a$ that are roots of the polynomial $\phi(s) := \sum_{i \leq b} z_i \cdot s^i$. Since $\phi$ is of degree $\leq b$ and thus has $\leq b = n/\lambda$ roots, this implies the lemma.
\end{proof}
Letting $\delta = 2^{-\lambda} \cdot n/\lambda$, we see that Construction \ref{constr-axu} achieves key length $p = \lambda < \log n + \logdel$. (In general one can have constructions that decouple $p$ from $\lambda$, but we will not consider those here.) The following corollary is immediate.
\begin{corollary}
\label{cor-axu}
For every $n$ and $\delta$, there is a $\delta$-AXU family with $p = \lambda = \log(n/\delta)$.
\end{corollary}
\subsection{Putting it together}
Combining the results of the preceding subsections, the following main theorem is proved. Recall that for a MAC, $n$ denotes the message length, $m$ denotes the key length, $\lambda$ denotes the tag length, and $\delta$ denotes the maximum advantage of any adversary.
\begin{theorem}
\label{thm-main-mac}
There exist $\delta$-secure one-time MACs in each of the following parameter regimes.
\begin{enumerate}
\item For any $n$ and $\lambda$, $m = 2\lambda$ and $\delta = n \cdot 2^{-\lambda} = n \cdot 2^{-m/2}$.
\item For any $n$ and $m$, $\lambda = m/2$ and $\delta = n \cdot 2^{-m/2}$.
\item For any $n$ and $\delta$, $m = 2 \log(n/\delta)$ and $\lambda = \log(n/\delta)$.
\end{enumerate}
\end{theorem}
It is interesting to note that if one only cares about message authentication rather than encryption, Shannon's well-known lower bound in the setting of one-time statistical security, namely that key length $\geq$ message length, does not hold.
We remark on the optimality of this MAC construction. First, there is a lower bound by Alon (unpublished) which shows that any MAC must satisfy
\begin{equation}
\label{eqn-mac-lb}
m \geq \log n + 2 \logdel - \log\logdel.
\end{equation}
The construction in Theorem \ref{thm-main-mac} essentially achieves this bound up to the constant factor 2 on $\log n$. Second, a paper by Gemmell and Naor \cite{GN93} notes that the existence of a MAC with $m = \log n + 2 \logdel$ can be proved non-constructively, which again improves on Theorem \ref{thm-main-mac} only by the constant factor $2$ on $\log n$.
\begin{quesject}
Prove either or both of the above bounds, namely the lower bound (\ref{eqn-mac-lb}) and the (non-constructive) MAC that achieves $m = \log n + 2 \logdel$.
\end{quesject}
Before moving on, we note the following two simple lower bounds. First, the tag length $\lambda$ must be at least $\logdel$; this is because an adversary can correctly guess a tag with probability $2^{-\lambda}$. Second, the key length $m\ge 2 \logdel$ (even when $n=1$). We will not prove it now (see next lecture), but the intuition is that when $R \leftarrow U_{2 \logdel}$, $\Tag_R(x)$ has $\logdel$ bits of entropy, and for any $x' \neq x$ the value $\Tag_R(x')$ has $\logdel$ bits of entropy even conditioned on $\Tag_R(x)$. Note that when the message length $n = 1$, this can be achieved by parsing $r = (r_0,r_1) \in \zo^{2 \logdel}$ and defining $\Tag_r(x) = r_x$ where $x\in \zo$.
\section{MACs with imperfect randomness}
We now begin to study a question which we will continue in the next lectures, namely: is it possible to build a MAC from an imperfect source of randomness?
To make sense of this question we must formalize what is meant by ``imperfect''. This is done by defining the notion of the {\em entropy} of a given distribution $R$. There are multiple types of entropy that one can define; the most common form is {\em Shannon entropy}, denoted $\hsha(R)$, which we will not define here. Shannon entropy is typically not the ``right'' notion of entropy for cryptography, because it is possible to define pathological distributions that have high Shannon entropy but are useless to cryptographic algorithms. The main type of entropy that we will consider is {\em min-entropy}, denoted $\hinf(R)$ and defined next. Later we will also consider {\em collision entropy}, which is denoted $\htwo(R)$. For any distribution $R$, these three types of entropy satisfy $\hinf(R) \leq \htwo(R) \leq \hsha(R)$.
\begin{definition}
Let $R$ be a distribution. The {\em predictability} of $R$ is defined by $\Pred(R) := \max_r (\Pr[R = r])$, and the {\em min-entropy} of $R$ is defined by $\hinf(R) := \log(1 / \Pred(R))$. When $\hinf(R) \geq k$ we say that $R$ is a {\em $k$-source}.
\end{definition}
Note that $R$ is a $k$-source if and only if $\Pr[R = r] \leq 2^{-k}$ for every $r$ in the support of $R$. Also, the value $\Pred(R)$ is equal to the maximum, over all computationally unbounded adversaries $E$, of $\Pr_R[E$ guesses $R]$.
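These quantities are straightforward to compute for an explicitly given distribution. The following Python sketch (our illustration, representing a distribution as a dictionary from outcomes to probabilities) computes $\Pred(R)$ and $\hinf(R)$.

```python
import math

def predictability(dist):
    """Pred(R) = max_r Pr[R = r]; dist maps outcomes to probabilities."""
    return max(dist.values())

def min_entropy(dist):
    """H_inf(R) = log(1 / Pred(R))."""
    return -math.log2(predictability(dist))

uniform = {r: 1 / 8 for r in range(8)}     # U_3: a 3-source
skewed = {0: 1 / 2, 1: 1 / 4, 2: 1 / 4}    # Pred = 1/2, so only a 1-source
assert min_entropy(uniform) == 3.0
assert min_entropy(skewed) == 1.0
```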
With this definition in hand, we now define MACs with imperfect randomness and prove a general transformation from perfect randomness to imperfect randomness.
\begin{definition}
A function $\Tag$ is a {\em $(k,\delta)$-secure one-time MAC} if it is an $(R,\delta)$-secure one-time MAC for all $k$-sources $R$.
\end{definition}
\begin{theorem}
\label{thm-k-source-mac}
If $\Tag$ is a $\delta$-secure MAC with key length $m$, then for every $k \leq m$ it is also a $(k, 2^{m-k} \cdot \delta)$-secure MAC.
\end{theorem}
Informally speaking, a theorem such as this one holds for any cryptographic task which deals with ``unpredictability'' (as opposed to the stronger notion of ``indistinguishability''). Theorem \ref{thm-k-source-mac} follows immediately from the next lemma, where $f(r)=\Adv_E(r)$ is indeed non-negative.
\begin{lemma}
\label{lem-any-to-uniform}
For every function $f : \zo^m \to \mathbb{R}^{\geq 0}$ and every $k$-source $R$ on $\zo^m$, $$\mathbb{E}[f(R)] \leq 2^{m-k} \cdot \mathbb{E}[f(U_m)].$$
\end{lemma}
\begin{proof}
Because $\Pr[R = r] \leq \Pred(R)$ for all $r$ by definition, we have
\begin{eqnarray*}
\mathbb{E}[f(R)] & = & \sum_r \Pr[R = r] \cdot f(r) \\
& \leq & \Pred(R) \cdot 2^m \cdot \sum_r \frac{1}{2^m} \cdot f(r) \\
& = & 2^{m - \hinf(R)} \cdot \mathbb{E}[f(U_m)].
\end{eqnarray*}
Notice, the inequality crucially used the fact that $f\ge 0$. Indeed, the result is wrong for general $f$, as we will see later.
\end{proof}
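A quick numeric sanity check of Lemma \ref{lem-any-to-uniform} (a Python sketch with toy parameters of our choosing): we take $m = 3$ and the $1$-source $R$ uniform on two of the $2^3$ keys, and verify the inequality for many randomly chosen non-negative functions $f$.

```python
import random

def check_bound(f, support, m, k):
    """Check E[f(R)] <= 2^(m-k) * E[f(U_m)] for R uniform on `support`."""
    e_R = sum(f[r] for r in support) / len(support)   # E[f(R)]
    e_U = sum(f) / (1 << m)                           # E[f(U_m)]
    return e_R <= (1 << (m - k)) * e_U + 1e-12

random.seed(2013)
m, k = 3, 1
support = [0, 1]    # uniform on 2 points: min-entropy exactly k = 1
assert all(check_bound([random.random() for _ in range(1 << m)],
                       support, m, k)
           for _ in range(1000))
```

Note that the check would fail for signed $f$ (e.g.\ $f$ negative off the support of $R$), matching the remark in the proof that non-negativity is crucial.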
Combining Theorems \ref{thm-hash-to-mac}, \ref{thm-main-mac}, and \ref{thm-k-source-mac}, we obtain the following.
\begin{theorem}
\label{thm-final}
For any $k$ such that $m/2 + \log n < k \leq m$, the function $\Tag$ defined in Theorem \ref{thm-hash-to-mac} is a $(k, n \cdot 2^{m/2 - k})$-secure MAC with tag length $\lambda = m/2$.
In other words, for every $n$ and $\delta$, every $m \geq 2 \log(n/\delta)$, and every $k$ with $m/2 + \log(n/\delta) \leq k \leq m$, there exists a $(k,\delta)$-secure MAC with tag length $\lambda = m/2$.
\end{theorem}
\subsection{Conditional Min-Entropy and Direct Proof of Theorem \ref{thm-final}}
We conclude today's lecture by giving a more direct proof of Theorem \ref{thm-final} that in particular does not use the general transformation of Theorem \ref{thm-k-source-mac}. To do so we need some simple facts about min-entropy, as well as the following notion of conditional min-entropy which comes from \cite{DORS04}.
\begin{definition}
Let $A,B$ be two jointly-distributed random variables, and define $$\Pred(A\, |\, B) := \mathbb{E}_{b \leftarrow B}[\Pred(A\, |\, B = b)] = \max_E\left(\Pr_{A,B}[E(B) = A]\right).$$
Then, the {\em conditional min-entropy} of $A$ given $B$ is $\hinf(A\, |\, B) := \log(1/\Pred(A\, |\, B))$.
\end{definition}
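This definition is also easy to compute for explicit joint distributions. The following Python sketch (our illustration, representing the joint distribution of $(A,B)$ as a dictionary from pairs to probabilities) computes $\Pred(A\,|\,B) = \sum_b \max_a \Pr[A = a \wedge B = b]$.

```python
import math

def pred_cond(joint):
    """Pred(A|B) = sum_b max_a Pr[A=a, B=b]; joint maps (a, b) -> prob."""
    best = {}
    for (a, b), p in joint.items():
        best[b] = max(best.get(b, 0.0), p)
    return sum(best.values())

# A uniform on 2 bits, B = low bit of A: seeing B halves the uncertainty,
# so Pred(A|B) = 1/2 and H_inf(A|B) = 1.
joint = {(a, a & 1): 0.25 for a in range(4)}
assert pred_cond(joint) == 0.5
assert math.isclose(-math.log2(pred_cond(joint)), 1.0)
```

On this example one can also verify the chain of Lemma \ref{lem-ent-2} directly: $\Pred(A) = 1/4 \leq \Pred(A\,|\,B) = 1/2 \leq L \cdot \Pred(A,B) = 2 \cdot 1/4$.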
Note that $\hinf(A\, |\, B)$ is {\em not} equivalent to $\mathbb{E}_{b \leftarrow B}[\log(1/\Pred(A\, |\, B = b))]$ (which has the $\mathbb{E}$ and $\log$ switched). This latter definition turns out not to be very useful for cryptography; in contrast, $2^{-\hinf(A|B)} = \Pred(A|B) = \max_E\left(\Pr[E(B)=A]\right)$ exactly measures the best probability with which $E$ can guess $A$ given $B$.
%\newpage
\begin{lemma}
\label{lem-ent-1}
For every distribution $Z$ and every deterministic function $g$: $\hinf(Z) \geq \hinf(g(Z))$.
\end{lemma}
\begin{proof}
This is equivalent to $\Pred(Z) \leq \Pred(g(Z))$, which holds because applying $g$ to the output of any predictor for $Z$ gives a predictor for $g(Z)$.
\end{proof}
\begin{lemma}
\label{lem-ent-2}
For all distributions $A,B$ with $|$Support$(B)| \leq L$, the following two (equivalent) statements hold.
\begin{center}
\begin{tabular}{ccccccc}
1. $\hinf(A)$ & $\geq$ & $\hinf(A\, |\, B)$ & $\geq$ & $\hinf(A,B) - \log L$ & $\geq$ & $\hinf(A) - \log L.$ \\
&&&&&&\\
2. $\Pred(A)$ & $\leq$ & $\Pred(A\, |\, B)$ & $\leq$ & $L \cdot \Pred(A,B)$ & $\leq$ & $L \cdot \Pred(A).$
\end{tabular}
\end{center}
\end{lemma}
\begin{proof}
The statements are equivalent by definition. The only non-trivial inequality is $\Pred(A\, |\, B) \leq \Pred(A,B) \cdot L$, which we now prove following \cite[Lem.\ 2.2]{DORS04}.
\begin{eqnarray*}
\Pred(A\, |\, B) & = & \mathbb{E}_{b \leftarrow B} [\Pred(A\, |\, B = b)] \\
& = & \sum_b \max_a \left(\Pr[A = a\, |\, B = b]\right) \cdot \Pr[B = b] \\
& = & \sum_b \max_a \left(\Pr[A = a \wedge B = b] \right)\\
& \leq & \sum_b \max_{a,b'} \left(\Pr[A = a \wedge B = b'] \right)\\
& = & L \cdot \max_{a,b'} \left(\Pr[A = a \wedge B = b'] \right) = L \cdot \Pred(A,B).
\end{eqnarray*}
A more ``algorithmic'' way to prove this result is to turn any predictor $E(b)$ for $A$ given $b\leftarrow B$ into a predictor $E'$ for $(A,B)$ as follows. $E'$ samples uniformly random $b\leftarrow$ Support$(B)$, and then runs $a\leftarrow E(b)$, and outputs $(a,b)$. Intuitively, irrespective of the actual distribution $B$, the random sample of $b$ from Support$(B)$ is ``correct'' (call this event $Cor$) with probability at least $1/L$. Moreover, it is easy to see (exercise) that conditioning on $Cor$ does not affect the marginal distribution of ``real'' $(A,B)$. Thus, $\Adv_{E'}(A,B) \ge \frac{1}{L}\cdot \Adv_E(A|B)$.
\end{proof}
We now turn to the direct proof of Theorem \ref{thm-final}. Recall that the MAC we are considering is defined by $$\Tag_{(a,b)}(x) := b + \sum_{i=1}^d x_i \cdot a^i$$ where $a,b \in GF(2^\lambda)$ and $x \in \zo^n$ is parsed as $(x_1,\ldots,x_d) \in GF(2^\lambda)^d$ for $d := n/\lambda$.
Observe that for any $x \neq x' \in GF(2^\lambda)^d$ and any $t,t' \in GF(2^\lambda)$, the system of equations
\begin{align*}
b &+ \sum_{i=1}^d x_i \cdot a^i = t \\
b &+ \sum_{i=1}^d x'_i \cdot a^i = t'
\end{align*}
has $\leq d$ solutions for $a$, and these solutions are the same for each $b \in GF(2^\lambda)$. We define the {\em position} of a given $(a,b)$, denoted $\Pos_{(a,b)}(x,x') \in \{1,\ldots,d\}$, as follows. Let $t = \Tag_{(a,b)}(x)$ and $t' = \Tag_{(a,b)}(x')$, and let $a_1,a_2,\ldots$ be the lexicographically-ordered set of solutions to the above system with this $t,t'$. Then define $\Pos_{(a,b)}(x,x') := i$ where $a = a_i$.
\begin{proof}
Let $E$ be a computationally unbounded adversary, and assume wlog that $E$ is deterministic. Recall the following game between $E$ and the challenger $C$ that defines $\Tag$'s security; here we split the final step into two parts as a technical convenience.
$G_{(A,B)} :=$
\begin{enumerate}
{\setlength\itemindent{1cm}
\item $E$ chooses and sends $x \in (GF(2^\lambda))^d$ to $C$.
\item $C$ samples $(A,B) \leftarrow GF(2^\lambda) \times GF(2^\lambda)$ and sends $T = \Tag_{(A,B)}(x)$ to $E$.
\item $E$ computes $X'$ (as a function of $T$) and sends it to $C$.
\item $E$ computes $T'$ (as a function of $T$) and sends it to $C$.
}
\end{enumerate}
$E$ wins $G_{(A,B)}$ if $X' \neq x$ and $\Tag_{(A,B)}(X') = T'$.
\smallskip
For the $X'$ that $E$ outputs, let $T^* := \Tag_{(A,B)}(X')$ denote its real tag. Because $X'$ is a function of $T$ only, it is clear that the strategy that maximizes $\Adv_E(A,B)$ is to try to compute in step 4 this value $T^*$, given only $T$. Thus denoting $\delta := \max_E(\Adv_E(A,B))$, we must have $\logdel \geq \hinf(T^*\, |\, T)$, and so it suffices to prove $\hinf(T^*\, |\, T) \geq k - \lambda - \log d$ as follows.
\begin{eqnarray*}
\hinf(T^*\, |\, T) & \geq & \hinf(T^*, T) - \lambda \\
& \geq & \hinf(T^*, T, \Pos_{(A,B)}(x,X')) - \lambda - \log d \\
& \geq & \hinf(A,B) - \lambda - \log d \\
& = & k - \lambda - \log d.
\end{eqnarray*}
The first two inequalities hold by Lemma \ref{lem-ent-2} since the support size of $T$ and $\Pos_{(A,B)}(x,X')$ is $2^{\lambda}$ and $d$, respectively. The third inequality holds by Lemma \ref{lem-ent-1}, because there is a deterministic $g$ such that $g(T^*, T, \Pos_{(A,B)}(x,X')) = (A,B)$; note that $g$ can compute $x$ and $X'$ because the former is fixed and the latter depends only on $T$.
\end{proof}
\begin{thebibliography}{1}
\bibitem{DORS04} Yevgeniy Dodis, Rafail Ostrovsky, Leonid Reyzin, and Adam Smith. Fuzzy extractors: how to generate strong keys from biometrics and other noisy data. In {\em EUROCRYPT 2004}.
\bibitem{GN93} Peter Gemmell and Moni Naor. Codes for interactive authentication. In {\em CRYPTO 1993}.
\end{thebibliography}
\end{document}