Learning with Errors (LWE)

The Learning with Errors (LWE) assumption is a common assumption, first introduced in [Reg05]. It has since been used to construct a wide variety of powerful primitives.


Parameters

The LWE assumption has four key parameters:

* <math>q</math> - the prime modulus
* <math>n</math> - the dimension of the secret vector
* <math>\eta</math> - the noise distribution over <math>\mathbb{Z}_q</math>
* <math>m</math> - the number of samples given to the adversary

For a better sense of these parameters, note that when <math>\eta</math> is uniform over <math>\mathbb{Z}_q</math>, the assumption is trivially true, since <math>A\cdot s + e</math> is then itself uniform and reveals nothing about the secret. In contrast, when <math>\eta</math> is constant, one can solve the LWE problem via Gaussian elimination. The other key parameters are <math>n</math> and <math>q</math>, which control the size of the secret space: there are <math>q^n</math> possible secret vectors, and if this number is too small, an adversary could brute-force over all of them. The <math>m</math> parameter is often ignored, as we assume adversaries run in polynomial time and that LWE is hard for any <math>m</math> that is polynomial in the security parameter. A sample-generation sketch follows below.
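To make the parameters concrete, here is a minimal sketch of LWE sample generation in Python using NumPy. The toy parameter values and the centered-binomial stand-in for <math>\eta</math> are illustrative assumptions, not choices made by this article; real instantiations use a much larger dimension and a carefully chosen noise distribution.

<syntaxhighlight lang="python">
import numpy as np

# Toy parameters (assumed for illustration only; far too small to be secure).
q, n, m = 3329, 8, 16

rng = np.random.default_rng()
A = rng.integers(0, q, size=(m, n))   # uniform matrix over Z_q
s = rng.integers(0, q, size=n)        # uniform secret vector
e = rng.binomial(8, 0.5, size=m) - 4  # small centered noise, a stand-in for eta
b = (A @ s + e) % q                   # LWE samples: b = A*s + e (mod q)

# If eta were constant (say e = 0), s could be recovered from any n linearly
# independent rows of (A, b) by Gaussian elimination over Z_q; if eta were
# uniform over Z_q, b would be uniform and carry no information about s.
</syntaxhighlight>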

We will write <math>(q,n,\eta)</math>-LWE to make it clear which parameter regime is assumed to be hard.

Formal assumption

We say that the <math>(q,n,\eta)</math> Learning with Errors (LWE) assumption holds if for any efficient algorithm <math>D</math> and any polynomial <math>m(n)</math>, there is a negligible function <math>\nu</math> such that,

<center><math>|\Pr[D(A, A\cdot s + e) = 1] - \Pr[D(A, z) = 1]| \le \nu(n),</math></center>

where <math>A</math> is uniform from <math>\mathbb{Z}_q^{m(n)\times n}</math>, <math>s</math> is uniform from <math>\mathbb{Z}_q^n</math>, <math>e</math> is drawn from <math>\eta^{m(n)}</math>, and <math>z</math> is uniform from <math>\mathbb{Z}_q^{m(n)}</math>. Note that addition is computed modulo <math>q</math>.

In more plain terms, we are assuming that it is hard to distinguish a uniformly random vector from a vector that is a linear combination of the columns of <math>A</math> with some noise added. This is why the parameters, especially the noise distribution, are critical to the security of the LWE assumption.
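As an illustration of the distinguishing game itself, the following sketch (continuing the assumed toy parameters and stand-in noise from the sample-generation sketch above) produces the two challenge distributions that an algorithm <math>D</math> must tell apart.

<syntaxhighlight lang="python">
import numpy as np

def lwe_challenge(real, q=3329, n=8, m=16):
    """Return (A, b): the 'real' LWE distribution if real=True, else uniform."""
    rng = np.random.default_rng()
    A = rng.integers(0, q, size=(m, n))
    if real:
        s = rng.integers(0, q, size=n)
        e = rng.binomial(8, 0.5, size=m) - 4   # stand-in noise, as above
        b = (A @ s + e) % q                    # left-hand distribution: A*s + e
    else:
        b = rng.integers(0, q, size=m)         # right-hand distribution: uniform z
    return A, b

# The LWE assumption says no efficient D can distinguish lwe_challenge(True)
# from lwe_challenge(False) with more than negligible advantage.
</syntaxhighlight>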

Known attacks

Relationship to other assumptions

Implied primitives

Other notes

There are a few variants of the LWE assumption, notably Learning Parity with Noise (LPN) and Ring Learning with Errors (Ring-LWE).