Browse, search and filter the latest cybersecurity research papers from arXiv
Precise control of mechanical modes in the quantum regime is a key resource for quantum technologies, offering promising pathways for quantum sensing with macroscopic systems and scalable architectures for quantum simulation. In this work, we realise a multimode mechanical cavity coupled to a superconducting Kerr resonator, which induces nonlinearity in the mechanical modes. The Kerr mode is realised by a flux-tunable SQUID array resonator, while the mechanical modes are implemented by a surface acoustic wave (SAW) cavity. Both mechanical and electromagnetic modes are individually addressable via dedicated measurement lines, enabling full spectroscopic characterisation. We introduce a straightforward protocol to measure the SQUID array resonator's participation ratio in the hybrid acoustic modes, quantifying the degree of hybridisation. The participation ratio reveals that our device operates at the onset of the multimode coupling regime, where multiple acoustic modes simultaneously interact with the nonlinear superconducting element. Furthermore, this platform allows controllable Kerr-type nonlinearities in multiple acoustic modes, with the participation ratio serving as the key parameter determining both the dissipation rates and nonlinear strengths of these hybridised modes. Close to the resonant regime, we measure a cross-Kerr interaction between seven pairs of mechanical modes, which is controllable via the SQUID array resonator detuning. These results establish a platform for engineering nonlinear multimode mechanical interactions, offering potential for future integration with superconducting qubits and implementation of multiple mechanical qubits.
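For orientation only, here is a minimal sketch of the kind of multimode Kerr model invoked above; the notation is standard and is not taken from the paper itself. Denoting the hybridised acoustic modes by annihilation operators $a_m$ with frequencies $\omega_m$, self-Kerr strengths $K_m$, and cross-Kerr couplings $\chi_{mn}$, a generic effective Hamiltonian reads $H/\hbar = \sum_m \omega_m a_m^\dagger a_m + \sum_m \frac{K_m}{2} a_m^{\dagger 2} a_m^2 + \sum_{m<n} \chi_{mn}\, a_m^\dagger a_m\, a_n^\dagger a_n$, where the participation ratio of the SQUID array resonator in mode $m$ would set the scale of $K_m$ and of that mode's added dissipation.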
Doppler-broadening thermometry (DBT) can be used as a calibration-free primary reference suitable for practical applications, e.g. reliably measuring temperatures over long periods of time in environments where sensor retrieval is impractical. We report on our proof-of-concept investigations into DBT with alkali metal vapour cells, with a particular focus on both absorption and frequency accuracy during scans. We reach sub-kelvin temperature accuracy, and experimental absorption fit residuals below $0.05\,\%$, in a simple setup. The outlook for portable, practical devices is bright, with clear prospects for future improvement.
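As background on the thermometry principle, the standard Doppler (FWHM) width of an absorption line at frequency $\nu_0$ for an atom of mass $m$ is $\Delta\nu_D = \nu_0\sqrt{8 k_B T \ln 2 / (m c^2)}$, so fitting the measured lineshape yields the temperature $T$ directly from fundamental constants; this textbook relation is quoted here for context and is not reproduced from the abstract.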
Quantum key distribution (QKD) can provide secure key material between two parties without relying on assumptions about the computational power of an eavesdropper. QKD is performed over quantum links and quantum networks, systems which are resource-intensive to deploy and maintain. To evaluate and optimize performance prior to, during, and after deployment, realistic simulations with attention to physical realism are necessary. Quantum network simulators can simulate a variety of quantum and classical protocols and can assist in quantum network design and optimization by offering realism and flexibility beyond mathematical models which rely on simplifying assumptions and can be intractable to solve as network complexity increases. We use a versatile discrete event quantum network simulator to simulate the entanglement-based QKD protocol BBM92 and compare it to our experimental implementation and to existing theory. Furthermore, we simulate secure key rates in a repeater key distribution scenario for which no experimental implementations exist.
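For context, a commonly used asymptotic lower bound on the secret fraction for entanglement-based protocols of the BBM92/BB84 family is $r \ge 1 - h(e_x) - h(e_z)$, where $e_x$ and $e_z$ are the quantum bit error rates in the two conjugate bases and $h(p) = -p\log_2 p - (1-p)\log_2(1-p)$ is the binary entropy; whether the paper uses exactly this expression is not stated in the abstract.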
Quantum neural networks converge faster and achieve higher accuracy than classical models. However, data augmentation in quantum machine learning remains underexplored. To tackle data scarcity, we integrate quantum generative adversarial networks (QGANs) with hybrid quantum-classical neural networks (HQCNNs) to develop an augmentation framework. We propose two strategies: a general approach to enhance data processing and classification across HQCNNs, and a customized strategy that dynamically generates samples tailored to the HQCNN's performance on specific data categories, improving its ability to learn from complex datasets. Simulation experiments on the MNIST dataset demonstrate that QGAN outperforms traditional data augmentation methods and classical GANs. Compared to baseline DCGAN, QGAN achieves comparable performance with half the parameters, balancing efficiency and effectiveness. This suggests that QGANs can simplify models and generate high-quality data, enhancing HQCNN accuracy and performance. These findings pave the way for applying quantum data augmentation techniques in machine learning.
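As an illustration of the "customized strategy" described above (generating samples tailored to the classifier's per-class performance), the following is a minimal, hypothetical Python sketch; the function name, the inverse-accuracy weighting, and the sample budget are our own assumptions and not the paper's method.

# Hypothetical sketch (not the paper's code): class-adaptive augmentation where a
# generator's budget is concentrated on the categories the classifier handles worst.
import numpy as np

def allocate_synthetic_samples(per_class_accuracy, total_budget):
    """Allocate a generation budget inversely proportional to per-class accuracy."""
    acc = np.asarray(per_class_accuracy, dtype=float)
    weights = (1.0 - acc) + 1e-6          # weaker classes receive larger weights
    weights /= weights.sum()
    return np.round(weights * total_budget).astype(int)

if __name__ == "__main__":
    # Example: 10 MNIST classes, with the classifier struggling on classes 4 and 9.
    accuracy = [0.98, 0.97, 0.96, 0.95, 0.80, 0.94, 0.96, 0.93, 0.92, 0.78]
    budget = allocate_synthetic_samples(accuracy, total_budget=5000)
    print(dict(enumerate(budget)))        # most synthetic samples go to classes 4 and 9

In an actual pipeline the returned per-class counts would be passed to the trained QGAN to generate that many samples per class before the next HQCNN training round.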
We present a method to quantify entanglement in mixed states of highly symmetric systems. Symmetry constrains interactions between parts and predicts the degeneracies of the states. While symmetry alone produces entangled eigenstates, the thermal mixed state (density), which contains all of the eigenstate densities weighted by their Boltzmann factors, is not necessarily as entangled as the eigenstates themselves, because the mixed state can generally be re-expressed as a sum over densities that are less entangled. The entanglement of the mixed state is the minimum obtained by considering all such re-expressions, but there is no well-defined general approach to solving this problem. Our method uses symmetry to explicitly construct unentangled densities, which are then optimally included in the thermal mixed state, resulting in a quantitative measure of entanglement that accounts for the reduction of entanglement arising from degenerate states. We present results for several small spin systems.
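Formally, the minimization described above can be written as $E(\rho) = \min_{\{p_i,\rho_i\}} \sum_i p_i\, E(\rho_i)$ over all decompositions $\rho = \sum_i p_i \rho_i$ of the thermal state into densities, a convex-roof-type optimization that is hard in general; the symmetry-based construction of unentangled densities is the paper's route to bounding this minimum. (Standard formulation added here only to fix notation.)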
We investigate phase estimation in a lossy interferometer using entangled coherent states, with particular focus on a scenario where no reference beam is employed. By calculating the quantum Fisher information, we reveal two key results: (1) the metrological equivalence between scenarios with and without a reference beam, established under ideal lossless conditions for the two-phase-shifting configuration, breaks down in the presence of photon loss, and (2) the pronounced inferior performance of entangled coherent states relative to NOON states, observed in the presence of a reference beam, disappears in its absence.
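For reference, the quantum Fisher information $F_Q$ bounds the phase uncertainty through the quantum Cramér-Rao bound, $\Delta\phi \ge 1/\sqrt{\nu F_Q}$ for $\nu$ repetitions, so the comparisons above between entangled coherent states and NOON states amount to comparisons of $F_Q$ with and without the reference beam and with and without loss. (Standard relation, added for context.)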
Supervised Quantum Machine Learning (QML) represents an intersection of quantum computing and classical machine learning, aiming to use quantum resources to support model training and inference. This paper reviews recent developments in supervised QML, focusing on methods such as variational quantum circuits, quantum neural networks, and quantum kernel methods, along with hybrid quantum-classical workflows. We examine recent experimental studies that show partial indications of quantum advantage and describe current limitations including noise, barren plateaus, scalability issues, and the lack of formal proofs of performance improvement over classical methods. The main contribution is a ten-year outlook (2025-2035) that outlines possible developments in supervised QML, including a roadmap describing conditions under which QML may be used in applied research and enterprise systems over the next decade.
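As a concrete example of one method named above, quantum kernel approaches replace a classical kernel with a fidelity-type overlap, e.g. $k(x,x') = \left|\langle 0 | U^\dagger(x') U(x) | 0\rangle\right|^2$ for a data-encoding circuit $U(x)$, which is then fed to a classical learner such as a support vector machine; the specific encoding circuit varies between studies and is not specified by this abstract.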
Entanglement detection serves as a fundamental task in quantum information science, playing a critical role in quantum benchmarking and foundational studies. As the number of controllable qubits continues to increase, there emerges a pressing demand for scalable and robust entanglement detection protocols that can maintain high detection capability while requiring minimal resources. By integrating the positive partial transposition criterion with variational quantum interference, we develop an entanglement detection protocol that requires moderate classical and quantum computation resources. Numerical simulations demonstrate that this protocol attains high detection capability using only shallow quantum circuits, outperforming several widely-used entanglement detection methods. The protocol also exhibits strong resilience to circuit noise, ensuring its applicability across different physical platforms. Experimental implementation on a linear optical platform successfully identifies entanglement in a three-qubit mixed state that cannot be detected by conventional entanglement witnesses. Drawing upon the full potential of quantum and classical resources, our protocol paves a new path for efficient entanglement detection.
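For context, the positive partial transposition (PPT) criterion referenced above states that if a bipartite state $\rho_{AB}$ is separable then its partial transpose $\rho_{AB}^{T_B}$ has no negative eigenvalues, so a negative eigenvalue certifies entanglement; how the variational quantum interference accesses this spectral information with shallow circuits is detailed in the paper rather than in the abstract.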
Superconducting nanowire single-photon detectors (SNSPDs) have emerged as essential devices that push the boundaries of photon detection with unprecedented sensitivity, ultrahigh timing precision, and broad spectral response. Recent advancements in materials engineering, superconducting electronics integration, and cryogenic system design are enabling the evolution of SNSPDs from single-pixel detectors toward scalable arrays and large-format single-photon time tagging cameras. This perspective article surveys the rapidly evolving technological landscape underpinning this transition, focusing on innovative superconducting materials, advanced multiplexed read-out schemes, and emerging cryo-compatible electronics. We highlight how these developments are set to profoundly impact diverse applications, including quantum communication networks, deep-tissue biomedical imaging, single-molecule spectroscopy, remote sensing with unprecedented resolution, and the detection of elusive dark matter signals. By critically discussing both current challenges and promising solutions, we aim to articulate a clear, coherent vision for the next generation of SNSPD-based quantum imaging systems.
The super-additivity of quantum channel capacity is an important feature of quantum information theory with no classical counterpart, and it has attracted considerable attention. Recently, a special channel called the ``platypus channel'' was shown to exhibit super-additive quantum capacity when combined with qudit erasure channels. Here we consider the ``generalized platypus channel'' and prove that it has computable channel capacities, with both its private and classical capacities equal to $1$; in particular, the generalized platypus channel still displays super-additivity of quantum capacity when combined with qudit erasure channels and with multilevel amplitude damping channels, respectively.
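In symbols, super-additivity of the quantum capacity $Q$ means that there exist channels $N_1$, $N_2$ with $Q(N_1 \otimes N_2) > Q(N_1) + Q(N_2)$, so capacities cannot in general be evaluated channel by channel; this standard definition is added only to fix notation for the statement above.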
Existing quantum discrete adiabatic approaches are hindered by circuit depth that increases linearly with the number of evolution steps, a significant challenge for current quantum hardware with limited coherence times. To address this, we propose a co-designed framework that synergistically integrates dynamic circuit capabilities with real-time classical processing. This framework reformulates the quantum adiabatic evolution into discrete, dynamically adjustable segments. The unitary operator for each segment is optimized on-the-fly using classical computation, and circuit multiplexing techniques are leveraged to reduce the overall circuit depth scaling from $O(\text{steps}\times\text{depth}(U))$ to $O(\text{depth}(U))$. We implement and benchmark a quantum discrete adiabatic linear solver based on this framework for linear systems of $W \in \{2,4,8,16\}$ dimensions with condition numbers $\kappa \in \{10,20,30,40,50\}$. Our solver successfully overcomes previous depth limitations, maintaining over 80% solution fidelity even under realistic noise models. Key algorithmic optimizations contributing to this performance include a first-order approximation of the discrete evolution operator, a tailored dynamic circuit design exploiting real-imaginary component separation, and noise-resilient post-processing techniques.
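For orientation, discrete adiabatic evolution interpolates a Hamiltonian $H(s_j) = (1-s_j)H_0 + s_j H_1$ with $s_j = j/T$ over $T$ steps and applies one unitary segment per step, which is why the naive coherent depth scales as $O(T \times \text{depth}(U))$; the framework above uses dynamic circuits (mid-circuit measurement with real-time classical feedback) so that only one segment's worth of coherent depth is required at a time, which is one plausible reading of the quoted $O(\text{depth}(U))$ scaling (our paraphrase for context, not the paper's derivation).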
We investigate the fundamental limits of converting light into useful work, with a focus on the role of quantum resources in energy harvesting processes. Specifically, we analyze how quantum coherence, non-Gaussianity, and entanglement affect the fluctuations in the energy output of bosonic quantum batteries. Our findings reveal potential pathways to enhance the efficiency and stability of energy extraction from quantum batteries, with implications for the development of quantum thermal machines at the nanoscale. Moreover, this work highlights a tangible thermodynamic quantum advantage, demonstrating how quantum effects can be harnessed to improve the performance of practical energy conversion tasks.
Continuous-variable quantum thermodynamics in the Gaussian regime provides a promising framework for investigating the energetic role of quantum correlations, particularly in optical systems. In this work, we introduce an entropy-free criterion for entanglement detection in bipartite Gaussian states, rooted in a distinct thermodynamic quantity: ergotropy -- the maximum extractable work via unitary operations. By defining the relative ergotropic gap, which quantifies the disparity between global and local ergotropy, we derive two independent analytical bounds that distinguish entangled from separable states. We show that for a broad class of quantum states, the bounds coincide, making the criterion both necessary and sufficient. We further extend our analysis to certain non-Gaussian states and observe analogous energy-based signatures of quantum correlations. These findings establish a direct operational link between entanglement and energy storage, offering an experimentally accessible approach to entanglement detection in continuous-variable optical platforms.
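For completeness, the ergotropy of a state $\rho$ with Hamiltonian $H$ is $\mathcal{W}(\rho) = \mathrm{tr}(\rho H) - \min_U \mathrm{tr}(U \rho U^\dagger H)$, the maximal energy extractable by a unitary; the relative ergotropic gap used above compares a global quantity of this type for the bipartite state with the corresponding local ones, though its precise normalization is defined in the paper rather than here.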
Gentle measurements of quantum states do not entirely collapse the initial state. Instead, they provide a post-measurement state at a prescribed trace distance $\alpha$ from the initial state together with a random variable used for quantum learning of the initial state. We introduce here the class of $\alpha$-locally-gentle measurements ($\alpha$-LGM) on a finite dimensional quantum system which are product measurements on product states and prove a strong quantum Data-Processing Inequality (qDPI) on this class using an improved relation between gentleness and quantum differential privacy. We further show a gentle quantum Neyman-Pearson lemma which implies that our qDPI is asymptotically optimal (for small $\alpha$). This inequality is employed to show that the necessary number of quantum states for prescribed accuracy $\epsilon$ is of order $1/(\epsilon^2 \alpha^2)$ for both quantum tomography and quantum state certification. Finally, we propose an $\alpha$-LGM called quantum Label Switch that attains these bounds. It is a general implementable method to turn any two-outcome measurement into an $\alpha$-LGM.
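In the notation of the abstract, $\alpha$-gentleness means that the post-measurement state $\rho'$ stays within trace distance $\alpha$ of the initial state $\rho$, i.e. $\frac{1}{2}\|\rho - \rho'\|_1 \le \alpha$ (written here with the common $\frac{1}{2}\|\cdot\|_1$ convention, which may differ from the paper's normalization by a constant factor).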
Accurate measurement of vector magnetic fields is critical for applications including navigation, geoscience, and space exploration. Nitrogen-vacancy (NV) center spin ensembles offer a promising solution for high-sensitivity vector magnetometry, as their different orientations in the diamond lattice measure different components of the magnetic field. However, the bias magnetic field typically used to separate signals from each NV orientation introduces inaccuracy from drifts in permanent magnets or coils. Here, we present a novel bias-field-free approach that labels the NV orientations via the direction of the microwave (MW) field in a variable-pulse-duration Ramsey sequence used to manipulate the spin ensemble. Numerical simulations demonstrate the possibility to isolate each orientation's signal with sub-nT accuracy even without precise MW field calibration, at only a moderate cost to sensitivity. We also provide proof-of-principle experimental validation, observing relevant features that evolve as expected with applied magnetic field. Looking forward, by removing a key source of drift, the proposed protocol lays the groundwork for future deployment of NV magnetometers in high-accuracy or long-duration missions.
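As background on why the four orientations suffice for vector magnetometry: each NV axis $\hat{n}_i$ (along one of the four $\langle 111\rangle$ directions of the diamond lattice) primarily senses the projection $B_i = \vec{B}\cdot\hat{n}_i$, so collecting the projections into $\vec{b} = A\vec{B}$, with the rows of $A$ given by the $\hat{n}_i$, lets one recover the field by least squares, $\vec{B} = (A^{\mathsf T}A)^{-1}A^{\mathsf T}\vec{b}$; this generic reconstruction is stated for context and is independent of the paper's MW-direction labelling protocol.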
Under strong drives, which are becoming necessary for fast high-fidelity operations, transmons can be structurally unstable. Due to chaotic effects, the computational manifold is no longer well separated from the remainder of the spectrum, which correlates with enhanced offset-charge sensitivity and destructive effects in readout. We show here that these detrimental effects can further propagate to other degrees of freedom, for example to neighboring qubits in a multi-qubit system. Specifically, a coherently driven transmon can act as a source of incoherent noise to another circuit element coupled to it. By using a full quantum model and a semiclassical analysis, we perform the noise spectroscopy of the driven transmon coupled to a spectator two-level system (TLS), and we show that, in a certain limit, the interaction with the driven transmon can be modeled as a stochastic diffusive process driving the TLS.
We quantify and optimize the predictability of local observables in bipartite quantum systems by employing the Bayes risk and the inference variance, two measures rooted in statistical learning theory. Specifically, we minimize these quantities when the prediction is improved by an additional quantum system, providing analytical expressions for arbitrary two-qubit states and showcasing a connection with Einstein-Podolsky-Rosen steering criteria. Then, we embed our Bayes risk minimization into an entanglement-based quantum key distribution protocol, yielding asymptotically higher secure-key rates than standard BB84 under realistic noise. We apply these results to Bell states affected by local amplitude-damping noise, and to spin correlations in top-antitop quark pairs from high-energy collisions.
Relaxation rates are key characteristics of quantum processes, as they determine how quickly a quantum system thermalizes, equilibrates, decoheres, and dissipates. While they play a crucial role in theoretical analyses, relaxation rates are also often directly accessible through experimental measurements. Recently, it was shown that for quantum processes governed by Markovian semigroups, the relaxation rates satisfy a universal constraint: the maximal rate is upper-bounded by the sum of all rates divided by the dimension of the Hilbert space. This bound, initially conjectured a few years ago, was only recently proven using classical Lyapunov theory. In this work, we present a new, purely algebraic proof of this constraint. Remarkably, our approach is not only more direct but also allows for a natural generalization beyond completely positive semigroups. We show that complete positivity can be relaxed to 2-positivity without affecting the validity of the constraint. This reveals that the bound is more subtle than previously understood: 2-positivity is necessary, but even when further relaxed to Schwarz maps, a slightly weaker -- yet still non-trivial -- universal constraint still holds. Finally, we explore the connection between these bounds and the number of steady states in quantum processes, uncovering a deeper structure underlying their behavior.
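Written out, the constraint discussed above reads $\Gamma_{\max} \le \frac{1}{d}\sum_k \Gamma_k$, where the $\Gamma_k$ are the relaxation rates of the semigroup generator acting on a $d$-dimensional Hilbert space and the sum runs over all rates; this is simply the inequality stated in words in the abstract, rewritten in symbols.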