Learning parity with noise
The learning parity with noise (LPN) assumption is a post-quantum candidate assumption that is closely related to the learning-with-errors (LWE) assumption.
Assumption
For parameters $n \in \mathbb{N}$ (the dimension) and $\varepsilon \in (0, 1/2)$ (the noise rate), we define the $(n, \varepsilon)$-LPN advantage of an adversary $\mathcal{A}$ as
$$\mathsf{Adv}^{\mathsf{LPN}}_{n, \varepsilon}[\mathcal{A}](m) = \left| \Pr[\mathcal{A}(A, As \oplus e) = 1] - \Pr[\mathcal{A}(A, u) = 1] \right|,$$
where $A \gets \mathbb{F}_2^{m \times n}$, $s \gets \mathbb{F}_2^{n}$, and $u \gets \mathbb{F}_2^{m}$ are uniformly random, and $e \gets \mathsf{Ber}_{\varepsilon}^{m}$ is a vector of $m$ i.i.d. bits that each equal $1$ with probability $\varepsilon$.
Then, we say that $(n, \varepsilon)$-LPN is hard if there exists a negligible function $\mathrm{negl}$ such that for all efficient adversaries $\mathcal{A}$ and all polynomials $m = m(n)$, $\mathsf{Adv}^{\mathsf{LPN}}_{n, \varepsilon}[\mathcal{A}](m(n)) \leq \mathrm{negl}(n)$.
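To make the "real" distribution in the game concrete, here is a minimal Python sketch of sampling $(A, As \oplus e)$. The parameter values and function names are illustrative, and the code is for intuition only (it is neither constant-time nor cryptographic-quality).

```python
import secrets

def bernoulli(eps: float) -> int:
    """Sample from Ber_eps: returns 1 with probability (approximately) eps."""
    # eps is truncated to 2^-32 precision, which is fine for illustration.
    return 1 if secrets.randbelow(2**32) < int(eps * 2**32) else 0

def lpn_samples(n: int, m: int, eps: float):
    """Sample (A, b) with A uniform over F_2^{m x n} and b = A s XOR e."""
    s = [secrets.randbelow(2) for _ in range(n)]                      # secret in F_2^n
    A = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]  # uniform matrix
    # Each coordinate of b is the inner product <A_i, s> plus a noise bit, mod 2.
    b = [(sum(a * x for a, x in zip(row, s)) + bernoulli(eps)) % 2
         for row in A]
    return A, b

# Constant-noise regime: eps is a constant independent of n.
A, b = lpn_samples(n=128, m=256, eps=0.125)
```

The "random" distribution of the game simply replaces $b$ with $m$ uniform bits.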
Noise Level
Depending on the setting of $\varepsilon$ relative to $n$, the $(n, \varepsilon)$-LPN problem falls into different regimes, which are generally known to imply different results.
- Constant-noise: $\varepsilon = \Theta(1)$ (weakest assumption)
- High-noise: $\varepsilon = n^{-c}$ for $c < 1/2$
- Mid-noise: $\varepsilon = n^{-c}$ for every $c \leq 1/2$
- Low-noise: $\varepsilon = n^{-c}$ for some $c > 1/2$. (strongest assumption)
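As a quick sanity check on the regimes, the expected number of noisy coordinates among $m = n$ samples is $\varepsilon \cdot n$. The short Python calculation below uses one illustrative exponent per regime (the exponents are examples, not the regime boundaries):

```python
n = 10_000

# Illustrative noise rates, one per regime (exponents chosen for this example only).
regimes = {
    "constant (eps = 1/8)":    1 / 8,
    "high     (eps = n^-1/4)": n ** -0.25,
    "mid      (eps = n^-1/2)": n ** -0.5,
    "low      (eps = n^-3/4)": n ** -0.75,
}

for name, eps in regimes.items():
    # Expected number of flipped bits among m = n samples is eps * n.
    print(f"{name}: expected errors among n samples ~ {eps * n:.0f}")
```

At $n = 10{,}000$ this gives roughly 1250, 1000, 100, and 10 expected errors respectively: lower noise makes the problem easier to attack, so assuming hardness there is a stronger assumption.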
Subexponential LPN
Some applications require assuming subexponential LPN, meaning that any algorithm which achieves non-negligible advantage in the LPN game must run in time $2^{\Omega(n^{\delta})}$ for some constant $\delta > 0$ (often $\delta = 1/2$).
In other words, any algorithm which runs in time $2^{o(n^{\delta})}$ has negligible advantage, whereas the standard LPN assumption only constrains polynomial-time adversaries. Typically, this subexponential assumption is made only in the constant-noise regime, making it incomparable to the more standard polynomial-hardness assumptions at lower noise rates.
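To see what the subexponential bound buys, one can compare $\log_2$ of the allowed running time $2^{n^{\delta}}$ (with the illustrative choice $\delta = 1/2$) against a fixed polynomial running time such as $n^3$:

```python
import math

DELTA = 0.5  # illustrative; the assumption only requires some constant delta > 0

def log2_subexp(n: int) -> float:
    """log2 of the 2^(n^DELTA) running-time bound."""
    return n ** DELTA

for n in (64, 256, 1024, 4096):
    poly = 3 * math.log2(n)  # log2 of n^3, a typical polynomial running time
    print(f"n={n}: log2(2^(n^0.5)) = {log2_subexp(n):.0f}, log2(n^3) = {poly:.0f}")
```

For moderate $n$ the polynomial bound is larger, but beyond that the subexponential bound dwarfs every fixed polynomial, which is why ruling out $2^{o(n^{\delta})}$-time attacks is a strictly stronger requirement than ruling out polynomial-time ones.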
Known results
- Mid-noise (and hence low-noise) LPN implies public-key encryption (PKE) — Ale03
- Some works show that low-noise LPN with noise rate $\varepsilon = \log^2(n)/n$ implies collision-resistant hash functions — BLVW19, YZW+19
- Low-noise LPN, for a suitable inverse-polynomial noise rate, is known to imply PIR with slightly sublinear communication (through the use of trapdoor hash functions) — AMR25
- Fully sublinear PIR from any flavor of LPN is open.
- Secret-key doubly efficient PIR (SK-DEPIR) can be built from mid- and high-noise LPN — CIMR25 (Secret-Key PIR from Random Linear Codes)
- CCA-secure PKE and oblivious transfer (OT) can be built from subexponential LPN — YZ16
- Public-key pseudorandom codes (PRCs) can be built from subexponential LPN — CG24
Attacks
TODO