Learning parity with noise

The learning parity with noise (LPN) assumption is a post-quantum hardness assumption equivalent to the hardness of decoding a random linear code over $\FF_2$. It can be viewed as LWE specialized to the binary field.

Assumption

For dimension $k$, noise rate $\varepsilon \in (0, 1/2)$, and sample count $m$, the LPN game asks an adversary $\calA$ to distinguish a noisy linear system from a uniformly random pair.

\begin{algorithm}
\algname{Game}
\caption{$\Game^{\mathrm{lpn}}_{k,\varepsilon,m,\calA}(\secpar)$}
\begin{algorithmic}
\State $\mathbf{A} \getsr \FF_2^{m \times k}$
\State $\mathbf{s} \getsr \FF_2^k$; $\mathbf{e} \getsr \mathrm{Ber}(\varepsilon)^m$
\State $b \getsr \{0,1\}$
\State $\mathbf{v}_0 := \mathbf{A} \cdot \mathbf{s} + \mathbf{e}$
\State $\mathbf{v}_1 \getsr \FF_2^m$
\State $b' \gets \calA(1^\secpar, \mathbf{A}, \mathbf{v}_b)$
\Return $[b' = b]$
\end{algorithmic}
\end{algorithm}
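
To make the game concrete, here is a minimal Python sketch of the sampling and challenge steps, assuming numpy; the names `lpn_sample` and `lpn_game` are illustrative, not from any library.

```python
import numpy as np

rng = np.random.default_rng()

def lpn_sample(k: int, eps: float, m: int):
    """Sample an LPN instance: matrix A, noisy codeword v0, uniform v1."""
    A = rng.integers(0, 2, size=(m, k), dtype=np.uint8)   # A <- F_2^{m x k}
    s = rng.integers(0, 2, size=k, dtype=np.uint8)        # s <- F_2^k
    e = (rng.random(m) < eps).astype(np.uint8)            # e <- Ber(eps)^m
    v0 = (A.astype(np.int64) @ s + e) % 2                 # v0 := A.s + e over F_2
    v1 = rng.integers(0, 2, size=m, dtype=np.uint8)       # v1 <- F_2^m
    return A, v0.astype(np.uint8), v1

def lpn_game(adversary, k: int, eps: float, m: int) -> bool:
    """One round of the distinguishing game; True iff the adversary wins."""
    A, v0, v1 = lpn_sample(k, eps, m)
    b = int(rng.integers(0, 2))                           # challenge bit
    b_prime = adversary(A, v0 if b == 0 else v1)
    return b_prime == b
```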

$(k, \varepsilon)$-LPN is hard if for all efficient $\calA$ and all polynomials $m = m(\secpar)$,

\[
\Adv^{\mathrm{lpn}}_{k,\varepsilon,m,\calA}(\secpar) := \left| \Pr\left[ \Game^{\mathrm{lpn}}_{k,\varepsilon,m,\calA}(\secpar) = 1 \right] - \frac{1}{2} \right|
\]

is negligible.

LPN is naturally stated over $\FF_2$. However, it generalizes to any finite field $\FF_q$: replace $\FF_2$ with $\FF_q$ throughout, and let each noise coordinate be zero with probability $1 - \varepsilon$ and uniformly random in $\FF_q$ with probability $\varepsilon$.
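
As an illustration, a short sketch of this $\FF_q$ noise distribution in Python (assuming numpy; the helper `fq_noise` is hypothetical):

```python
import numpy as np

rng = np.random.default_rng()

def fq_noise(q: int, eps: float, m: int):
    """Each coordinate is 0 w.p. 1 - eps and uniform in F_q w.p. eps."""
    noisy = rng.random(m) < eps           # which coordinates carry noise
    vals = rng.integers(0, q, size=m)     # uniform elements of {0, ..., q-1}
    return np.where(noisy, vals, 0)
```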

Noise Level

Depending on how $\varepsilon$ is set relative to $k$, the $(k, \varepsilon)$-LPN problem falls into different regimes, which are generally known to imply qualitatively different results.

  • Constant-noise: $\varepsilon = \Theta(1)$ (weakest assumption)
  • High-noise: $\varepsilon = k^{-\delta}$ for a fixed $\delta < 1/2$
  • Mid-noise: $\varepsilon = k^{-\delta}$ for every $\delta < 1/2$
  • Low-noise: $\varepsilon = k^{-\delta}$ for some $\delta \geq 1/2$. (strongest assumption)

If noise drops to $\varepsilon = O(\log(k)/k)$, then there are folklore attacks which run in polynomial time and achieve constant advantage.
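
The folklore attack admits a short sketch in Python (assuming numpy; `solve_f2` and `folklore_attack` are illustrative names): guess $k$ noise-free samples, solve the resulting square system over $\FF_2$, and check the candidate secret against all $m$ samples. Each trial succeeds with probability roughly $(1 - \varepsilon)^k$, which is inverse-polynomial for $\varepsilon = O(\log(k)/k)$, so polynomially many trials suffice.

```python
import numpy as np

rng = np.random.default_rng()

def solve_f2(A, v):
    """Gauss-Jordan elimination over F_2 for a square system; None if singular."""
    M = np.concatenate([A % 2, (v % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    k = A.shape[0]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r, col]), None)
        if piv is None:
            return None                        # submatrix singular; caller retries
        M[[col, piv]] = M[[piv, col]]          # move pivot row into place
        for r in range(k):
            if r != col and M[r, col]:
                M[r] ^= M[col]                 # eliminate column col elsewhere
    return M[:, -1]

def folklore_attack(A, v, eps, trials=10_000):
    """Guess k noise-free samples, solve for s, verify against all m samples."""
    m, k = A.shape
    for _ in range(trials):
        idx = rng.choice(m, size=k, replace=False)     # hope e[idx] == 0
        s_guess = solve_f2(A[idx], v[idx])
        if s_guess is None:
            continue
        residual = (A.astype(int) @ s_guess + v) % 2   # equals e if guess is right
        if residual.sum() <= (eps + 0.25) * m:         # ~ eps*m vs ~ m/2 if wrong
            return s_guess
    return None
```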

Subexponential LPN

Some applications require assuming subexponential LPN, meaning that any algorithm which achieves non-negligible advantage in the LPN game must run in time $2^{\Omega(k^{\delta})}$ for some constant $\delta \in (0, 1)$ (often $\delta = 1/2$).

In other words, any algorithm which runs in time $2^{o(k^{\delta})}$ has negligible advantage, whereas the normal LPN assumption only constrains polynomial-time adversaries. Typically, the subexponential assumption is made only in the constant-noise regime, making it incomparable to more standard lower-noise LPN assumptions.

Known results

Attacks

TODO

Variations

Sparse Learning Parity with Noise

Sparse LPN replaces the uniformly random matrix $\mathbf{A}$ with one whose rows are $d$-sparse: each row is sampled uniformly from all binary vectors of Hamming weight exactly $d$. The secret and noise distributions are unchanged. For $d \ll k$, the matrix can be stored and multiplied far more efficiently, making Sparse LPN particularly attractive for pseudorandom correlation generator (PCG) constructions.

\begin{algorithm}
\algname{Game}
\caption{$\Game^{\mathrm{slpn}}_{k,\varepsilon,m,d,\calA}(\secpar)$}
\begin{algorithmic}
\State $\mathbf{A} \getsr \FF_2^{m \times k}$ with each row sampled uniformly from weight-$d$ vectors
\State $\mathbf{s} \getsr \FF_2^k$; $\mathbf{e} \getsr \mathrm{Ber}(\varepsilon)^m$
\State $b \getsr \{0,1\}$
\State $\mathbf{v}_0 := \mathbf{A} \cdot \mathbf{s} + \mathbf{e}$
\State $\mathbf{v}_1 \getsr \FF_2^m$
\State $b' \gets \calA(1^\secpar, \mathbf{A}, \mathbf{v}_b)$
\Return $[b' = b]$
\end{algorithmic}
\end{algorithm}

$(k, \varepsilon, d)$-Sparse LPN is hard if for all efficient $\calA$ and all polynomials $m = m(\secpar)$,

\[
\Adv^{\mathrm{slpn}}_{k,\varepsilon,m,d,\calA}(\secpar) := \left| \Pr\left[ \Game^{\mathrm{slpn}}_{k,\varepsilon,m,d,\calA}(\secpar) = 1 \right] - \frac{1}{2} \right|
\]

is negligible. Note that Sparse LPN with $d = \Theta(k)$ essentially reduces to standard LPN, so sparse hardness is a stronger assumption for smaller $d$.
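
To illustrate the efficiency gain from sparsity, a minimal sketch (assuming numpy; helper names are illustrative): store each row as its $d$ nonzero indices, so computing $\mathbf{A} \cdot \mathbf{s}$ costs $O(md)$ bit operations rather than $O(mk)$.

```python
import numpy as np

rng = np.random.default_rng()

def sample_sparse_rows(m: int, k: int, d: int):
    """Represent A by its nonzero positions: row i is the d indices rows[i]."""
    return np.stack([rng.choice(k, size=d, replace=False) for _ in range(m)])

def sparse_matvec(rows, s):
    """Compute A.s over F_2 in O(m d) time by XORing d secret bits per row."""
    return s[rows].sum(axis=1) % 2

# Usage: v0 := A.s + e with d-sparse rows.
# rows = sample_sparse_rows(m, k, d)
# v0 = (sparse_matvec(rows, s) + e) % 2
```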

Known results

Ring-LPN

Ring-LPN replaces the matrix $\mathbf{A}$ with multiplication by a random polynomial $a \getsr R$ in the ring $R = \FF_2[X]/(f(X))$ for a fixed polynomial $f$ of degree $k$. The secret is $s \getsr R$ and the LPN sample is $(a, a \cdot s + e)$ for noise $e \in R$ with $\mathrm{Ber}(\varepsilon)$ coefficients. The ring structure reduces the public key from $k^2$ bits to $k$ bits and enables faster computation via polynomial multiplication.
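
A minimal sketch of one Ring-LPN sample (assuming numpy; `ring_mul` is an illustrative schoolbook implementation, and $f$ is a monic degree-$k$ modulus given as a coefficient array, lowest degree first):

```python
import numpy as np

rng = np.random.default_rng()

def ring_mul(a, b, f):
    """Schoolbook multiply in F_2[X]/(f); coefficient arrays, lowest degree first."""
    prod = np.convolve(a, b) % 2                # multiply in F_2[X]
    k = len(f) - 1                              # deg f (f monic, length k + 1)
    for i in range(len(prod) - 1, k - 1, -1):   # reduce mod f, top coefficient down
        if prod[i]:
            prod[i - k : i + 1] ^= f            # subtract X^(i-k) * f (XOR over F_2)
    return prod[:k]

def ring_lpn_sample(k: int, eps: float, f):
    """One Ring-LPN sample (a, a*s + e) with Ber(eps) noise coefficients."""
    a = rng.integers(0, 2, size=k)
    s = rng.integers(0, 2, size=k)
    e = (rng.random(k) < eps).astype(int)
    return a, (ring_mul(a, s, f) + e) % 2

# Illustrative modulus only (not a recommended parameter choice):
# f = X^k + 1, i.e., f = np.array([1] + [0] * (k - 1) + [1])
```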

Ring-LPN underlies practical authentication protocols (e.g., Lapin) and efficient pseudorandom correlation generator constructions.