[CIMR25] Secret-Key PIR from Random Linear Codes
Authors: Caicai Chen, Yuval Ishai, Tamer Mour, Alon Rosen | Venue: preprint | Source
Abstract
Private information retrieval (PIR) allows to privately read a chosen bit from an $N$-bit database $x$ with $o(N)$ bits of communication. Lin, Mook, and Wichs (STOC 2023) showed that by preprocessing $x$ into an encoded database $\hat{x}$, it suffices to access only $\mathrm{polylog}(N)$ bits of $\hat{x}$ per query. This requires a superlinear-size encoding $\hat{x}$, and prohibitively large server circuit size.
We consider an alternative preprocessing model (Boyle et al. and Canetti et al., TCC 2017), where the encoding depends on a client's short secret key. In this secret-key PIR (sk-PIR) model we construct a protocol with $O(N^{\epsilon})$ communication, for any constant $\epsilon > 0$, from the Learning Parity with Noise (LPN) assumption in a parameter regime not known to imply public-key encryption. This is evidence against public-key encryption being necessary for sk-PIR.
Under a new conjecture related to the hardness of learning a hidden linear subspace of $\mathbb{F}_2^n$ with noise, we construct sk-PIR with similar communication and encoding size, in which the server is implemented by a Boolean circuit of size $O(N)$. This is the first candidate PIR scheme with such a circuit complexity.
Notes
They introduce the Learning Subspace with Noise (LSN) conjecture. They show how to build secret-key PIR from both LPN and LSN.
- Note that this builds only a very mildly doubly efficient PIR, in the same way that BIPW17 build SK-DEPIR
- I guess secret-key PIR that is not doubly efficient could be interesting…?
How is SK-PIR different from preprocessing PIR?
- The secret key is independent of the PIR database
- So, first sk is generated → then used to encode a database
- But only the sk is given to the decoding procedure
- But I could preprocess the database and store the state encrypted on the server; then I could run a 2-round protocol where I download the encrypted state
- This gives a secret-key 2-round DEPIR with sublinear communication and computation
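The flow in these bullets (keygen → encode → query/answer → decode, with only the sk needed to decode) can be sketched as a toy interface. All function names here are mine, not the paper's; the "encoding" is just a one-time pad derived from sk, and the server returns the whole encoded database, so this captures only the syntax and the fact that the server never learns the key or the index, none of the actual scheme's sublinear communication:

```python
# Toy sk-PIR interface: keygen -> encode -> (query, answer, decode).
# Placeholder scheme only: pad-based encoding, full-download answer.
import hashlib
import secrets

def keystream(sk: bytes, n: int) -> list:
    """Expand sk into n pseudorandom bits (stand-in for the real code-based encoding)."""
    bits, counter = [], 0
    while len(bits) < n:
        block = hashlib.sha256(sk + counter.to_bytes(8, "big")).digest()
        bits.extend((byte >> j) & 1 for byte in block for j in range(8))
        counter += 1
    return bits[:n]

def keygen() -> bytes:
    return secrets.token_bytes(16)           # short client secret key, independent of the DB

def encode(sk: bytes, db: list) -> list:
    pad = keystream(sk, len(db))
    return [b ^ p for b, p in zip(db, pad)]  # encoded database, stored by the server

def query(i: int):
    return None                              # trivial scheme: the query reveals nothing

def answer(encoded_db: list, q) -> list:
    return encoded_db                        # server is oblivious to i (linear communication)

def decode(sk: bytes, i: int, ans: list) -> int:
    return ans[i] ^ keystream(sk, len(ans))[i]

db = [1, 0, 1, 1, 0, 0, 1, 0]
sk = keygen()
edb = encode(sk, db)
assert all(decode(sk, i, answer(edb, query(i))) == db[i] for i in range(len(db)))
```

Note how the separation the bullets describe shows up in the types: `encode` needs both sk and the database, while `decode` needs only sk.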
Circuit Sizes
- The paper focuses on minimizing the communication and the server circuit size, rather than sublinear runtime in the RAM/cell-probe model
- Although some of their constructions also achieve this
Learning Subspace with Noise (LSN)
The learning subspace with noise assumption $(n, t, \delta)$-LSN asserts
- Wait actually is this just taken from DKL09?
that for a uniformly random secret rank-$t$ matrix $A \in \mathbb{F}_2^{n \times t}$ and any polynomial number of samples $m$, it holds that $(v_1, \ldots, v_m) \approx_c (u_1, \ldots, u_m)$, where for $i \in [m]$: $v_i$ is uniformly random in $\mathbb{F}_2^n$ with probability $\delta$, and uniformly random in the column span of $A$ otherwise; and each $u_i$ is uniformly random in $\mathbb{F}_2^n$.
Essentially, this means you take a random $t$-dimensional subspace of the $n$-dimensional latent space. Then, you give out samples of this subspace planted randomly among uniformly random vectors.
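A minimal sampler for this distribution (a sketch under the parameter names $n$, $t$, $\delta$ used in these notes, with vectors in $\mathbb{F}_2^n$ encoded as bitmask integers; it illustrates the planted-subspace structure, not the paper's exact formulation):

```python
# Toy sampler for the LSN distribution described above.
import random

def rank_f2(vectors):
    """Rank over F_2 of bitmask-encoded vectors, via incremental Gaussian elimination."""
    basis = {}  # highest set bit -> basis vector with that leading bit
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]
    return len(basis)

def random_subspace(n, t, rng):
    """t linearly independent vectors in F_2^n: the columns of the secret rank-t matrix A."""
    while True:
        cols = [rng.getrandbits(n) for _ in range(t)]
        if rank_f2(cols) == t:
            return cols

def lsn_sample(cols, n, delta, rng):
    if rng.random() < delta:
        return rng.getrandbits(n)   # "noise": uniform in F_2^n
    s = 0
    for c in cols:                  # else: uniform element of the column span of A
        if rng.random() < 0.5:
            s ^= c
    return s

rng = random.Random(0)
n, t = 32, 4
cols = random_subspace(n, t, rng)
planted = [lsn_sample(cols, n, 0.0, rng) for _ in range(12)]  # delta = 0: all in the subspace
uniform = [rng.getrandbits(n) for _ in range(12)]
assert rank_f2(planted) <= t   # subspace samples span at most t dimensions
assert rank_f2(uniform) > t    # 12 uniform vectors in F_2^32 exceed rank t w.h.p.
```

The two asserts show the structural gap the assumption claims is computationally invisible: planted samples live in a low-dimensional space, while uniform samples do not.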
Conjecture: For every constant $\epsilon > 0$, the $(n, t, \delta)$-LSN assumption holds for an appropriate choice of $t$ and $\delta$.
- So basically, all but a small fraction of the samples are random
LSN facts
- If there are enough planted samples (roughly $m \gtrsim n/(1-\delta)$), there is an LSN distinguisher running in time about $(1-\delta)^{-O(t)}$: guess $t+1$ planted samples and test them for linear dependence
- For a constant code rate $t/n$, LSN implies LPN with correspondingly related code dimension, code length, and noise rate
- LPN with noise rate $\eta$ implies a variant of LSN with the following noise pattern: let $S$ be a random set of linearly independent columns of the generator matrix (an information set). Then, for each sampled codeword, flip each bit outside $S$ independently with probability $\eta$
- CDV21 showed that, in a suitable parameter regime, the search version of the LSN assumption is equivalent to the standard LPN assumption with a corresponding noise rate
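The generic attack behind the first fact can be made concrete (my own sketch, not the paper's algorithm): any $t+1$ vectors from a $t$-dimensional subspace are linearly dependent over $\mathbb{F}_2$, while $t+1$ uniform vectors in $\mathbb{F}_2^n$ are independent w.h.p. when $n \gg t$, so repeatedly testing random $(t+1)$-subsets for dependence distinguishes once an all-planted subset is hit, at a cost of roughly $(1-\delta)^{-(t+1)}$ trials:

```python
# Brute-force LSN distinguisher: hunt for a linearly dependent (t+1)-subset.
import random

def rank_f2(vectors):
    """Rank over F_2 of bitmask-encoded vectors."""
    basis = {}  # highest set bit -> basis vector
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]
    return len(basis)

def lsn_samples(n, t, delta, m, rng):
    while True:  # secret subspace: t independent columns of A
        cols = [rng.getrandbits(n) for _ in range(t)]
        if rank_f2(cols) == t:
            break
    out = []
    for _ in range(m):
        if rng.random() < delta:
            out.append(rng.getrandbits(n))  # uniform noise sample
        else:
            s = 0
            for c in cols:                  # planted sample: random span element
                if rng.random() < 0.5:
                    s ^= c
            out.append(s)
    return out

def found_dependent_subset(samples, t, trials, rng):
    """Distinguisher: True iff some random (t+1)-subset is linearly dependent."""
    for _ in range(trials):
        if rank_f2(rng.sample(samples, t + 1)) <= t:
            return True
    return False

rng = random.Random(0)
n, t, delta, m = 64, 3, 0.5, 60
assert found_dependent_subset(lsn_samples(n, t, delta, m, rng), t, 2000, rng)
assert not found_dependent_subset([rng.getrandbits(n) for _ in range(m)], t, 2000, rng)
```

With $\delta = 1/2$ and $t = 3$, about one in $2^{t+1} = 16$ random subsets is all-planted (hence dependent), so 2000 trials succeed almost surely; on truly uniform input no dependent subset appears. This is why, in the conjectured regime, the planted fraction $1-\delta$ must be small relative to $t$.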