Learning with Errors (LWE)
<noinclude>
[[Category:Assumptions]]


The [[Learning with Errors (LWE)]] assumption is a common assumption, first introduced in [[ccref#reg05|[Reg05]]]. It has been used since to construct a wide variety of powerful primitives.

== Parameters ==
The LWE assumption has 4 key parameters:
* <math>q</math> - the prime modulus
* <math>n</math> - the dimension of the secret vector
* <math>\eta</math> - the noise distribution over <math>\mathbb{Z}_q</math>
* <math>m</math> - the number of samples given to the adversary
For a better sense of these parameters, note that when <math>\eta</math> is uniform over <math>\mathbb{Z}_q</math>, the assumption is trivially true, since <math>A\cdot s + e</math> is then itself uniform. In contrast, when <math>\eta</math> is constant, one can solve the LWE problem via Gaussian elimination. The other key parameters are <math>n</math> and <math>q</math>, which together determine the number of possible secret vectors, <math>q^n</math>; if this is too small, an adversary could brute-force over all of them. The <math>m</math> parameter is often ignored, as we assume adversaries run in polynomial time and that LWE is hard for any <math>m</math> that is polynomial in the security parameter.
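The constant-noise observation above can be sketched in code. The following is an illustrative Python sketch (function and variable names, and the small prime, are our own choices, not from this article): with the noise identically zero, <math>b = A\cdot s \bmod q</math> is an exact linear system, and since <math>q</math> is prime every nonzero pivot is invertible, so Gaussian elimination over <math>\mathbb{Z}_q</math> recovers the secret.

```python
import random

def solve_noiseless_lwe(A, b, q):
    """Recover s from b = A*s (mod q) via Gaussian elimination over Z_q.

    Illustrative sketch: with zero noise there is no error term, so the
    samples form an exact linear system, and q prime makes every nonzero
    pivot invertible.
    """
    m, n = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c]), None)
        if piv is None:
            continue  # rank-deficient column (vanishingly rare for random A)
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, q)  # modular inverse; exists since q is prime
        M[r] = [x * inv % q for x in M[r]]
        for i in range(m):
            if i != r and M[i][c]:
                M[i] = [(x - M[i][c] * y) % q for x, y in zip(M[i], M[r])]
        r += 1
    return [M[i][n] for i in range(n)]

# Noiseless "LWE" samples: b = A*s mod q, i.e. e = 0 for every sample.
q, n, m = 97, 6, 12
s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [sum(a * si for a, si in zip(row, s)) % q for row in A]
recovered = solve_noiseless_lwe(A, b, q)
assert recovered == s  # the secret is recovered exactly
```

Any other constant noise value reduces to this case by first subtracting the known constant from each entry of <math>b</math>.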


We will write <math>(q,n,\eta)</math>-LWE to make it clear which parameter regime is assumed to be hard.


== Formal assumption ==
We say that the <math>(q,n,\eta)</math> [[Learning with Errors (LWE)]] assumption holds if for any efficient algorithm <math>D</math> and any polynomial <math>m(n)</math>, there is a negligible function <math>\nu</math> such that,
<center><math>|\Pr[D(A,A\cdot s + e) = 1] - \Pr[D(A,z) = 1]| \le \nu(n),</math></center>
where <math>A</math> is uniform from <math>\mathbb{Z}^{m(n)\times n}_q</math>, <math>s</math> is uniform from <math>\mathbb{Z}_q^n</math>, <math>e</math> is drawn from <math>\eta^{m(n)}</math>, and <math>z</math> is uniform from <math>\mathbb{Z}^{m(n)}_q</math>. Note that addition is computed modulo <math>q</math>.


In plainer terms, we are assuming that it is hard to distinguish a uniformly random vector from a linear combination of the columns of <math>A</math> to which some noise has been applied. This is why the parameters (especially the noise distribution) are critical to the security of the LWE assumption.
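As a concrete illustration, the distinguisher's two input distributions from the formal assumption can be sampled as below. This is a hedged Python sketch: the function name and the `noise` callback (standing in for one draw from <math>\eta</math>) are our own illustrative choices, not notation from this article.

```python
import random

def lwe_challenge(q, n, m, noise, real):
    """Sample the distinguisher's input from the formal assumption.

    If real, return (A, A*s + e mod q); otherwise (A, z) with z uniform
    over Z_q^m. `noise()` stands in for one draw from the distribution
    eta; this interface is an illustrative assumption.
    """
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    if real:
        s = [random.randrange(q) for _ in range(n)]
        b = [(sum(a * si for a, si in zip(row, s)) + noise()) % q
             for row in A]
    else:
        b = [random.randrange(q) for _ in range(m)]
    return A, b

# Small bounded noise as a toy stand-in for a discrete Gaussian eta.
A, b = lwe_challenge(q=97, n=8, m=16,
                     noise=lambda: random.randrange(-2, 3), real=True)
```

The assumption says that no efficient algorithm can tell the `real=True` case apart from the `real=False` case with more than negligible advantage.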


== Relationship to other assumptions ==
* The [[LWE]] problem has a quantum reduction to the [[Shortest-Vector Problem (SVP)]]
* There is a worst-case to average-case reduction for the [[LWE]] assumption



Latest revision as of 15:41, 4 July 2024


== Known attacks ==


== Implied primitives ==


== Other notes ==
There are a few variations of the LWE assumption, specifically [[Learning Parity with Noise (LPN)]] and [[Ring Learning with Errors (Ring-LWE)]].