Learning with Errors (LWE)
Latest revision as of 15:41, 4 July 2024
The Learning with Errors (LWE) assumption is a common assumption, first introduced in [Reg05]. It has been used since to construct a wide variety of powerful primitives.
Parameters
The LWE assumption has 4 key parameters:
- <math>q</math> - the prime modulus
- <math>n</math> - the dimension of the secret vector
- <math>\eta</math> - the noise distribution over <math>\mathbb{Z}_q</math>
- <math>m</math> - the number of samples given to the adversary
For a better sense of these parameters, note that when <math>\eta</math> is uniform over <math>\mathbb{Z}_q</math>, the assumption is trivially true. In contrast, when <math>\eta</math> is constant, one can solve the LWE problem via Gaussian elimination. The other key parameters are <math>q</math> and <math>n</math>, which control the number of possible secret vectors, since there are <math>q^n</math> of them. If this is too small, then an adversary could brute-force over all of them. The parameter <math>m</math> is often ignored, as we assume adversaries run in polynomial time and that LWE is hard for any <math>m</math> polynomial in the security parameter.
We will write <math>(q,n,\eta)</math>-LWE to make it clear which parameter regime is assumed to be hard.
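To make the parameters concrete, here is a minimal sketch of generating LWE samples in Python. The parameter values and the bounded noise distribution standing in for <math>\eta</math> are hypothetical toy choices, far too small for any real security:

```python
import random

q, n, m = 97, 8, 16                 # toy (q, n, m); real instances are much larger

def noise():
    # crude bounded noise in {-2,...,2}, a stand-in for the distribution eta
    return random.randint(-2, 2)

s = [random.randrange(q) for _ in range(n)]                       # secret in Z_q^n
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]   # uniform m x n
b = [(sum(a * si for a, si in zip(row, s)) + noise()) % q for row in A]
z = [random.randrange(q) for _ in range(m)]                       # uniform in Z_q^m
# The assumption says (A, b) is computationally indistinguishable from (A, z).
```

Each of the <math>m</math> entries of <math>b</math> is one inner product <math>\langle a_i, s\rangle</math> plus fresh noise, reduced modulo <math>q</math>.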
Formal assumption
We say that the <math>(q,n,\eta)</math> Learning with Errors (LWE) assumption holds if for any efficient algorithm <math>D</math> and any polynomial <math>m(n)</math>, there is a negligible function <math>\nu</math> such that,
<center><math>|\Pr[D(A,A\cdot s + e) = 1] - \Pr[D(A,z) = 1]| \le \nu(n),</math></center>
where <math>A</math> is uniform from <math>\mathbb{Z}^{m(n)\times n}_q</math>, <math>s</math> is uniform from <math>\mathbb{Z}_q^n</math>, <math>e</math> is drawn from <math>\eta^{m(n)}</math>, and <math>z</math> is uniform from <math>\mathbb{Z}^{m(n)}_q</math>. Note that addition is computed modulo <math>q</math>.
In plainer terms, we are assuming that it is hard to distinguish a uniformly random vector <math>z</math> from a vector <math>A\cdot s + e</math>, which is a linear combination of the columns of <math>A</math> with some noise applied. This is why the parameters (especially the noise distribution) are critical to the security of the LWE assumption.
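To see concretely why the noise is critical, here is a sketch (with hypothetical toy parameters) of recovering the secret when <math>\eta</math> is identically zero, the simplest constant distribution, by Gaussian elimination over <math>\mathbb{Z}_q</math>; a nonzero known constant could simply be subtracted off first:

```python
import random

q, n = 97, 6                        # toy prime modulus and dimension (not secure)

def solve_mod_q(A, b, q):
    """Recover s from noiseless samples b = A*s (mod q) by Gauss-Jordan elimination."""
    size = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]           # augmented matrix [A | b]
    for col in range(size):
        piv = next((r for r in range(col, size) if M[r][col] != 0), None)
        if piv is None:
            return None                                    # A singular; caller resamples
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, q)                      # modular inverse (q prime)
        M[col] = [x * inv % q for x in M[col]]
        for r in range(size):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % q for x, y in zip(M[r], M[col])]
    return [M[r][size] for r in range(size)]

s = [random.randrange(q) for _ in range(n)]
s_rec = None
while s_rec is None:                                       # retry until A is invertible
    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    b = [sum(a * si for a, si in zip(row, s)) % q for row in A]   # eta == 0 here
    s_rec = solve_mod_q(A, b, q)

assert s_rec == s   # without noise, the secret is recovered exactly
```

With genuine noise, each elimination step mixes the error terms, which is what defeats this linear-algebra attack.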
Known attacks
- The best known approach to solving LWE is to attack the Shortest-Vector Problem (SVP) directly.
Relationship to other assumptions
- The LWE problem has a quantum reduction to the Shortest-Vector Problem (SVP).
- There is a worst-case to average-case reduction for the LWE assumption.
Implied primitives
- Public Key Encryption (PKE) can be built assuming only the hardness of LWE
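As an illustration, here is a toy sketch in the spirit of the public-key scheme from [Reg05], encrypting a single bit. All parameter choices are hypothetical and far too small for real security:

```python
import random

q, n, m = 257, 10, 40               # toy parameters; real ones are much larger
half = q // 2

def keygen():
    s = [random.randrange(q) for _ in range(n)]                   # secret key
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randint(-1, 1) for _ in range(m)]                 # small noise
    b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return (A, b), s                                              # pk is an LWE sample

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m) if random.random() < 0.5]            # random subset of rows
    c1 = [sum(A[i][j] for i in S) % q for j in range(n)]
    c2 = (sum(b[i] for i in S) + bit * half) % q                  # bit in the high "half"
    return c1, c2

def decrypt(sk, ct):
    c1, c2 = ct
    d = (c2 - sum(c * si for c, si in zip(c1, sk))) % q           # = bit*half + noise
    return 1 if q // 4 < d < 3 * q // 4 else 0                    # round to 0 or q/2

pk, sk = keygen()
assert decrypt(sk, encrypt(pk, 0)) == 0
assert decrypt(sk, encrypt(pk, 1)) == 1
```

Correctness holds because the accumulated noise (at most <math>m</math> terms of magnitude 1 here) stays below <math>q/4</math>, so rounding recovers the bit; security rests on the ciphertext being a fresh-looking LWE-type sample.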
Other notes
There are a few variations of the LWE assumption, notably Learning Parity with Noise (LPN) and Ring Learning with Errors (Ring-LWE).