Mitigating photon loss in linear optical quantum circuits: classical postprocessing methods outperforming postselection

James Mills, Rawad Mezher

Research output: Working paper › Preprint

Abstract / Description of output

Photon loss rates set an effective upper limit on the size of computations that can be run on current linear optical quantum devices. We present a family of techniques to mitigate the effects of photon loss on both output probabilities and expectation values derived from noisy linear optical circuits composed of an input of $n$ photons, an $m$-mode interferometer, and $m$ single-photon detectors. Central to these techniques is the construction of objects called recycled probabilities. Recycled probabilities are constructed from output statistics affected by loss, and are designed to amplify the signal of the ideal (lossless) probabilities. Classical postprocessing techniques then take recycled probabilities as input and output a set of loss-mitigated probabilities or expectation values. We provide analytical and numerical evidence that these methods can be applied, up to large sample sizes, to produce more accurate outputs than those obtained from postselection, which is currently the standard method of coping with photon loss when sampling from discrete-variable linear optical quantum circuits. In contrast, we provide strong evidence that the popular zero-noise extrapolation technique cannot improve on the performance of postselection for any photon loss rate.
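
The following is a minimal sketch of the kind of classical postprocessing the abstract describes, not the authors' actual construction. It compares plain postselection (keeping only detection events where all $n$ photons arrive) with a toy "recycled" estimator that reweights lower-photon-number sectors; the function names and the weights `weights[k]` are hypothetical placeholders, and the paper defines its recycled probabilities differently.

```python
# Toy comparison of postselection vs. a hypothetical sector-reweighting estimator
# applied to samples from a lossy n-photon, m-mode linear optical circuit.
from collections import Counter
import numpy as np

def postselected_probs(samples, n):
    """Keep only detection patterns containing all n photons, then normalise."""
    kept = [s for s in samples if sum(s) == n]
    counts = Counter(kept)
    total = len(kept)
    return {pattern: c / total for pattern, c in counts.items()} if total else {}

def recycled_probs(samples, n, weights):
    """Toy 'recycling': accumulate weighted counts over photon-number sectors k <= n.

    weights[k] is a hypothetical weight for the k-photon sector; the actual
    recycled probabilities in the paper are built differently.
    """
    acc = Counter()
    for s in samples:
        k = sum(s)
        if k <= n and weights.get(k, 0.0) != 0.0:
            acc[s] += weights[k]
    norm = sum(acc.values())
    return {pattern: v / norm for pattern, v in acc.items()} if norm else {}

# Made-up 3-mode, 2-photon detection patterns (tuples of detector click counts),
# including lossy outcomes with fewer than 2 detected photons.
rng = np.random.default_rng(0)
patterns = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 0, 0), (0, 0, 1), (0, 0, 0)]
samples = [patterns[i] for i in rng.integers(0, len(patterns), size=10_000)]

print(postselected_probs(samples, n=2))
print(recycled_probs(samples, n=2, weights={2: 1.0, 1: 0.3}))
```

Both estimators act purely on recorded detection statistics, which is the sense in which the paper's methods are classical postprocessing; the choice of sector weights here is arbitrary and only illustrates the interface.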
Original language: English
Publisher: ArXiv
Pages: 1-58
Number of pages: 58
DOIs
Publication status: Published - 3 May 2024

