Quantum-Efficient Convolution through Sparse Matrix Encoding and Low-Depth Inner Product Circuits
Abstract
Convolution operations are foundational to classical image processing and modern deep learning architectures, yet their extension to the quantum domain has remained algorithmically and physically costly due to inefficient data encoding and prohibitive circuit complexity. In this work, we present a resource-efficient quantum algorithm that reformulates the convolution product as a structured matrix multiplication via a novel sparse reshaping formalism. Leveraging the observation that localized convolutions can be encoded as doubly block-Toeplitz matrix multiplications, we construct a quantum framework in which sparse input patches are prepared using optimized key-value QRAM state encoding, while convolutional filters are represented as quantum states in superposition. The convolution outputs are computed through inner-product estimation using a low-depth SWAP test circuit, which yields overlap estimates from measurement statistics with reduced sampling overhead. Our architecture supports batched convolution across multiple filters using a generalized SWAP circuit. Compared with prior quantum convolutional approaches, our method eliminates redundant state-preparation costs, scales logarithmically with input size under sparsity assumptions, and integrates directly into hybrid quantum-classical machine learning pipelines. This work provides a scalable and physically realizable pathway toward quantum-enhanced feature extraction, opening new possibilities for quantum convolutional neural networks and data-driven quantum inference.
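To make the doubly block-Toeplitz reformulation concrete, the sketch below builds the convolution matrix classically for a 2D "full" convolution and verifies it against scipy.signal.convolve2d. This is a minimal illustrative model, not the paper's optimized sparse quantum encoding; the function name conv2d_as_matrix and the dense loop construction are assumptions introduced here for exposition.

```python
import numpy as np
from scipy.signal import convolve2d

def conv2d_as_matrix(K, in_shape):
    """Dense doubly block-Toeplitz matrix M satisfying
    (M @ I.ravel()).reshape(out_shape) == convolve2d(I, K, 'full').
    Block (p, m) depends only on p - m, and entry (q, n) within a
    block only on q - n, which is the doubly block-Toeplitz structure."""
    in_r, in_c = in_shape
    k_r, k_c = K.shape
    out_r, out_c = in_r + k_r - 1, in_c + k_c - 1
    M = np.zeros((out_r * out_c, in_r * in_c))
    for p in range(out_r):
        for q in range(out_c):
            for m in range(in_r):
                for n in range(in_c):
                    i, j = p - m, q - n  # kernel offsets
                    if 0 <= i < k_r and 0 <= j < k_c:
                        M[p * out_c + q, m * in_c + n] = K[i, j]
    return M

rng = np.random.default_rng(0)
I = rng.standard_normal((4, 4))
K = rng.standard_normal((3, 3))
M = conv2d_as_matrix(K, I.shape)
out = (M @ I.ravel()).reshape(I.shape[0] + K.shape[0] - 1,
                              I.shape[1] + K.shape[1] - 1)
assert np.allclose(out, convolve2d(I, K, mode="full"))
```

Each row of M contains at most k_r * k_c nonzero entries regardless of the input size, which is precisely the sparsity that a patch-based quantum encoding can exploit; the quantum state-preparation step itself is not modeled here.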
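Likewise, a minimal classical model of SWAP-test inner-product estimation is sketched below, assuming normalized state vectors and the standard relation P(0) = (1 + |<a|b>|^2) / 2 for the ancilla measurement; the function names and the binomial sampling model are illustrative assumptions, not the paper's circuit.

```python
import numpy as np

def swap_test_p0(a, b):
    """Exact probability of measuring the SWAP-test ancilla in |0>.
    For normalized states, P(0) = (1 + |<a|b>|^2) / 2."""
    a = np.asarray(a, dtype=complex) / np.linalg.norm(a)
    b = np.asarray(b, dtype=complex) / np.linalg.norm(b)
    return 0.5 * (1.0 + abs(np.vdot(a, b)) ** 2)

def estimate_sq_overlap(a, b, shots=4096, seed=None):
    """Monte-Carlo model of repeated SWAP tests: sample the ancilla
    `shots` times, then invert P(0) to estimate |<a|b>|^2.
    The statistical error shrinks as O(1/sqrt(shots))."""
    rng = np.random.default_rng(seed)
    zeros = rng.binomial(shots, swap_test_p0(a, b))
    return max(0.0, 2.0 * zeros / shots - 1.0)

# A flattened image patch and filter, treated as (unnormalized) state vectors.
patch  = np.array([1.0, 2.0, 0.0, 1.0])
kernel = np.array([0.5, 1.0, 0.0, 0.0])
est = estimate_sq_overlap(patch, kernel, shots=100_000, seed=7)
exact = abs(np.vdot(patch / np.linalg.norm(patch),
                    kernel / np.linalg.norm(kernel))) ** 2
print(f"estimated |<patch|kernel>|^2 = {est:.4f}  (exact {exact:.4f})")
```

Note that the plain SWAP test reveals only the squared magnitude of the overlap; recovering a signed convolution output requires additional circuit structure (e.g., a Hadamard-test-style interference circuit), which this classical model does not capture.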