Browse, search and filter the latest cybersecurity research papers from arXiv
Hardware crosstalk in multi-tenant superconducting quantum computers constitutes a significant security threat, enabling adversaries to inject targeted errors across tenant boundaries. We present the first end-to-end framework for mapping physical pulse-level attacks to interpretable logical error channels, integrating density-matrix simulation, quantum process tomography (QPT), and a novel isometry-based circuit extraction method. Our pipeline reconstructs the complete induced error channel and fits an effective logical circuit model, revealing a fundamentally asymmetric attack mechanism: one adversarial qubit acts as a driver to set the induced logical rotation, while a second, the catalyst, refines the attack's coherence. Demonstrated on a linear three-qubit system, our approach shows that such attacks can significantly disrupt diverse quantum protocols, sometimes reducing accuracy to random guessing, while remaining effective and stealthy even under realistic hardware parameter variations. We further propose a protocol-level detection strategy based on observable attack signatures, showing that stealthy attacks can be exposed through targeted monitoring and providing a foundation for future defense-in-depth in quantum cloud platforms.
Deep learning (DL) compilers are core infrastructure in modern DL systems, offering flexibility and scalability beyond vendor-specific libraries. This work uncovers a fundamental vulnerability in their design: can an official, unmodified compiler alter a model's semantics during compilation and introduce hidden backdoors? We study both adversarial and natural settings. In the adversarial case, we craft benign models where triggers have no effect pre-compilation but become effective backdoors after compilation. Tested on six models, three commercial compilers, and two hardware platforms, our attack yields 100% success on triggered inputs while preserving normal accuracy and remaining undetected by state-of-the-art detectors. The attack generalizes across compilers, hardware, and floating-point settings. In the natural setting, we analyze the top 100 HuggingFace models (including one with 220M+ downloads) and find natural triggers in 31 models. This shows that compilers can introduce risks even without adversarial manipulation. Our results reveal an overlooked threat: unmodified DL compilers can silently alter model semantics. To our knowledge, this is the first work to expose inherent security risks in DL compiler design, opening a new direction for secure and trustworthy ML.
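As a hedged illustration of why compilation alone can shift behavior (a toy sketch, not the paper's attack construction): compilers routinely reorder, fuse, or vectorize floating-point operations, and for inputs sitting near a decision boundary that last-bit drift is enough to change the predicted class.

```python
import numpy as np

# Toy illustration only: floating-point accumulation order, which a compiler
# may change when fusing or vectorizing operations, perturbs a logit slightly.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
x = rng.normal(size=1000).astype(np.float32)

logit_ref = np.float32(0)
for wi, xi in zip(w, x):      # "pre-compilation" sequential accumulation
    logit_ref += wi * xi

logit_opt = np.dot(w, x)      # "post-compilation" fused/vectorized accumulation

print(logit_ref, logit_opt)   # typically differ in the last bits; an input
                              # placed close enough to the decision threshold
                              # can therefore flip class only after compilation
```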
Software-based memory-erasure protocols are two-party communication protocols where a verifier instructs a computational device to erase its memory and send a proof of erasure. They aim at guaranteeing that low-cost IoT devices are free of malware by putting them back into a safe state without requiring secure hardware or physical manipulation of the device. Several software-based memory-erasure protocols have been introduced and theoretically analysed. Yet, many of them have not been tested for their feasibility, performance and security on real devices, which hinders their industry adoption. This article reports on the first empirical analysis of software-based memory-erasure protocols with respect to their security, erasure guarantees, and performance. The experimental setup consists of 3 modern IoT devices with different computational capabilities, 7 protocols, 6 hash-function implementations, and various performance and security criteria. Our results indicate that existing software-based memory-erasure protocols are feasible, although slow devices may take several seconds to erase their memory and generate a proof of erasure. We found that no protocol dominates across all empirical settings, defined by the computational power and memory size of the device, the network speed, and the required level of security. Interestingly, network speed and hidden constants within the protocol specification played a more prominent role in the performance of these protocols than anticipated based on the related literature. We provide an evaluation framework that, given a desired level of security, determines which protocols offer the best trade-off between performance and erasure guarantees.
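A minimal sketch of the protocol family under evaluation, assuming the common verifier/device structure in which the device overwrites its writable memory with a nonce-derived pseudorandom pattern and returns a digest as the proof of erasure; the seven evaluated protocols differ in how the pattern is derived and bounded, and `MEM_SIZE` and the hash choices here are illustrative.

```python
import hashlib
import os

MEM_SIZE = 1 << 20          # hypothetical device memory size (1 MiB)

def device_erase_and_prove(nonce: bytes) -> bytes:
    # Overwrite all writable memory with a nonce-derived pseudorandom pattern...
    stream = hashlib.shake_256(nonce).digest(MEM_SIZE)
    memory = bytearray(stream)             # stands in for the device's RAM
    # ...and return a digest of the filled memory as the proof of erasure.
    return hashlib.sha256(memory).digest()

def verifier_check(nonce: bytes, proof: bytes) -> bool:
    expected = hashlib.sha256(hashlib.shake_256(nonce).digest(MEM_SIZE)).digest()
    return proof == expected

nonce = os.urandom(32)                     # fresh challenge from the verifier
assert verifier_check(nonce, device_erase_and_prove(nonce))
```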
Existing dynamic vulnerability patching techniques are not well-suited for embedded devices, especially mission-critical ones such as medical equipment, as they have limited computational power and memory but uninterrupted service requirements. Those devices often lack sufficient idle memory for dynamic patching, and the diverse architectures of embedded systems further complicate the creation of patch triggers that are compatible across various system kernels and hardware platforms. To address these challenges, we propose a hot patching framework called StackPatch that facilitates patch development based on stack frame reconstruction. StackPatch introduces different triggering strategies to update programs stored in memory units. We leverage the exception-handling mechanisms commonly available in embedded processors to enhance StackPatch's adaptability across different processor architectures for control flow redirection. We evaluated StackPatch on embedded devices featuring three major microcontroller (MCU) architectures: ARM, RISC-V, and Xtensa. In the experiments, we used StackPatch to successfully fix 102 publicly disclosed vulnerabilities in real-time operating systems (RTOS). We applied patching to medical devices, soft programmable logic controllers (PLCs), and network services, with StackPatch consistently completing each vulnerability remediation in less than 260 MCU clock cycles.
The specification of state machine replication (SMR) has no requirement on the final total order of commands. In blockchains based on SMR, however, order matters, since different orders could provide their clients with different financial rewards. Ordered consensus augments the specification of SMR to include specific guarantees on such order, with a focus on limiting the influence of Byzantine nodes. Real-world ordering manipulations, however, can and do happen even without Byzantine replicas, typically because of factors, such as faster networks or closer proximity to the blockchain infrastructure, that give some clients an unfair advantage. To address this challenge, this paper proceeds to extend ordered consensus by requiring it to also support equal opportunity, a concrete notion of fairness, widely adopted in social sciences. Informally, equal opportunity requires that two candidates who, according to a set of criteria deemed to be relevant, are equally qualified for a position (in our case, a specific slot in the SMR total order), should have an equal chance of landing it. We show how randomness can be leveraged to keep bias in check, and, to this end, introduce the secret random oracle (SRO), a system component that generates randomness in a fault-tolerant manner. We describe two SRO designs based, respectively, on trusted hardware and threshold verifiable random functions, and instantiate them in Bercow, a new ordered consensus protocol that, by approximating equal opportunity up to within a configurable factor, can effectively mitigate well-known ordering attacks in SMR-based blockchains.
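To make the SRO's role concrete, a hedged sketch (not Bercow's actual mechanism): once the SRO releases an unpredictable seed for a block, equally qualified commands are assigned slots by a seed-keyed hash, so neither network speed nor proximity can buy a better position in the total order.

```python
import hashlib
import hmac
import os

def order_batch(sro_seed: bytes, commands: list[bytes]) -> list[bytes]:
    # Slot assignment depends only on the seed and the command bytes, so no
    # client can predict or influence its position before the seed is revealed.
    def slot_key(cmd: bytes) -> bytes:
        return hmac.new(sro_seed, cmd, hashlib.sha256).digest()
    return sorted(commands, key=slot_key)

seed = os.urandom(32)                        # stands in for the SRO output
batch = [b"tx-alice", b"tx-bob", b"tx-carol"]
print(order_batch(seed, batch))
```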
Organizations use data lakes to store and analyze sensitive data. However, attackers who compromise data lake storage can bypass access controls and read sensitive data. To address this, we propose Membrane, a system that (1) cryptographically enforces data-dependent access control views over a data lake and (2) does so without restricting the analytical queries data scientists can run. We observe that data lakes, unlike DBMSes, disaggregate computation and storage into separate trust domains, making at-rest encryption sufficient to defend against remote attackers targeting data lake storage, even when running analytical queries in plaintext. This leads to a new system design for Membrane that combines encryption at rest with SQL-aware encryption. Using block ciphers, a fast symmetric-key primitive with hardware acceleration in CPUs, we develop a new SQL-aware encryption protocol well-suited to at-rest encryption. Membrane adds overhead only at the start of an interactive session due to decrypting views, delaying the first query result by up to $\approx 20\times$; subsequent queries process decrypted data in plaintext, resulting in low amortized overhead.
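The at-rest model can be sketched as follows (illustrative only: Membrane's SQL-aware encryption, view enforcement, and key management are not reproduced, and the use of AES-GCM here is an assumption). The authorized view is stored encrypted in the data lake, decrypted once when an interactive session opens, and later queries run over the plaintext.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)    # held by the authorized analyst
aesgcm = AESGCM(key)

def store_view(rows: bytes) -> bytes:        # done once by the data owner
    nonce = os.urandom(12)
    return nonce + aesgcm.encrypt(nonce, rows, None)

def open_session(blob: bytes) -> bytes:      # one-time decryption cost per session
    return aesgcm.decrypt(blob[:12], blob[12:], None)

blob = store_view(b"alice,42\nbob,17\n")     # ciphertext at rest in the lake
plaintext_view = open_session(blob)          # subsequent queries use this
```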
Authors of cryptographic software are well aware that their code should not leak secrets through its timing behavior, and, until 2018, they believed that following industry-standard constant-time coding guidelines was sufficient. However, the revelation of the Spectre family of speculative execution attacks injected new complexities. To block speculative attacks, prior work has proposed annotating the program's source code to mark secret data, with hardware using this information to decide when to speculate (i.e., when only public values are involved) or not (when secrets are in play). While these solutions are able to track secret information stored on the heap, they suffer from limitations that prevent them from correctly tracking secrets on the stack, at a cost in performance. This paper introduces SecSep, a transformation framework that rewrites assembly programs so that they partition secret and public data on the stack. By moving from the source-code level to assembly rewriting, SecSep is able to address limitations of prior work. The key challenge in performing this assembly rewriting stems from the loss of semantic information through the lengthy compilation process. The key innovation of our methodology is a new variant of typed assembly language (TAL), Octal, which allows us to address this challenge. Assembly rewriting is driven by compile-time inference within Octal. We apply our technique to cryptographic programs and demonstrate that it enables secure speculation efficiently, incurring a low average overhead of $1.2\%$.
Federated Learning (FL) enables decentralized model training without sharing raw data, offering strong privacy guarantees. However, existing FL protocols struggle to defend against Byzantine participants, maintain model utility under non-independent and identically distributed (non-IID) data, and remain lightweight for edge devices. Prior work either assumes trusted hardware, uses expensive cryptographic tools, or fails to address privacy and robustness simultaneously. We propose DSFL, a Dual-Server Byzantine-Resilient Federated Learning framework that addresses these limitations using a group-based secure aggregation approach. Unlike LSFL, which assumes non-colluding semi-honest servers, DSFL removes this dependency after exposing a key vulnerability in that assumption: privacy leakage through client-server collusion. DSFL introduces three key innovations: (1) a dual-server secure aggregation protocol that protects updates without encryption or key exchange, (2) a group-wise credit-based filtering mechanism to isolate Byzantine clients based on deviation scores, and (3) a dynamic reward-penalty system for enforcing fair participation. DSFL is evaluated on MNIST, CIFAR-10, and CIFAR-100 under up to 30 percent Byzantine participants in both IID and non-IID settings. It consistently outperforms existing baselines, including LSFL, homomorphic encryption methods, and differential privacy approaches. For example, DSFL achieves 97.15 percent accuracy on CIFAR-10 and 68.60 percent on CIFAR-100, while FedAvg drops to 9.39 percent under similar threats. DSFL remains lightweight, requiring only 55.9 ms runtime and 1088 KB communication per round.
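A hedged sketch of the dual-server idea; DSFL's group-based aggregation and credit-based filtering are more involved, and additive masking is used here only to illustrate how individual updates can stay hidden from each server without encryption or key exchange.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_update(update: np.ndarray):
    # Each client sends one masked share to each server; neither share alone
    # reveals the update, but the two shares sum back to it.
    mask = rng.normal(size=update.shape)
    return update + mask, -mask

client_updates = [rng.normal(size=4) for _ in range(3)]
shares = [split_update(u) for u in client_updates]

sum_A = sum(s[0] for s in shares)          # server A sees only masked shares
sum_B = sum(s[1] for s in shares)          # server B sees only the masks
aggregate = (sum_A + sum_B) / len(client_updates)

assert np.allclose(aggregate, np.mean(client_updates, axis=0))
```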
Detecting unseen ransomware is a critical cybersecurity challenge where classical machine learning often fails. While Quantum Machine Learning (QML) presents a potential alternative, its application is hindered by the dimensionality gap between classical data and quantum hardware. This paper empirically investigates a hybrid framework using a Variational Quantum Classifier (VQC) interfaced with a high-dimensional dataset via Principal Component Analysis (PCA). Our analysis reveals a dual challenge for practical QML. A significant information bottleneck was evident, as even the best-performing 12-qubit VQC fell short of the classical baseline's 97.7\% recall. Furthermore, a non-monotonic performance trend, where performance degraded when scaling from 4 to 8 qubits before improving at 12 qubits, suggests a severe trainability issue. These findings highlight that unlocking QML's potential requires co-developing more efficient data compression techniques and robust quantum optimization strategies.
To address the risks of increasingly capable AI systems, we introduce a hardware-level off-switch that embeds thousands of independent "security blocks" in each AI accelerator. This massively redundant architecture is designed to prevent unauthorized chip use, even against sophisticated physical attacks. Our main security block design uses public key cryptography to check the authenticity of authorization licenses, and randomly generated nonces to prevent replay attacks. We evaluate attack vectors and present additional security block variants that could be added for greater robustness. Security blocks can be built with standard circuit components, ensuring compatibility with existing semiconductor manufacturing processes. With embedded security blocks, the next generation of AI accelerators could be more robustly defended against dangerous misuse.
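A minimal sketch of the license check a single security block might perform, assuming an Ed25519-style signature over the chip ID and a fresh nonce; the paper's exact license format, key provisioning, and on-chip implementation are not specified here.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()   # held by the authorization service
issuer_pub = issuer_key.public_key()        # provisioned into every security block
CHIP_ID = b"accelerator-0001"               # placeholder chip identifier

def block_challenge() -> bytes:
    return os.urandom(16)                   # fresh nonce defeats replayed licenses

def issue_license(nonce: bytes) -> bytes:
    return issuer_key.sign(CHIP_ID + nonce) # authorization bound to chip and nonce

def block_verify(nonce: bytes, license_sig: bytes) -> bool:
    try:
        issuer_pub.verify(license_sig, CHIP_ID + nonce)
        return True                         # keep the accelerator enabled
    except InvalidSignature:
        return False                        # refuse to operate

nonce = block_challenge()
assert block_verify(nonce, issue_license(nonce))
```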
Microcontroller units (MCUs) are widely used in embedded devices due to their low power consumption and cost-effectiveness. MCU firmware controls these devices and is vital to the security of embedded systems. However, performing dynamic security analyses for MCU firmware has remained challenging due to the lack of usable execution environments -- existing dynamic analyses cannot run on physical devices (e.g., due to insufficient computational resources), while building emulators is costly due to the massive amount of heterogeneous hardware, especially peripherals. Our work is based on the insight that MCU peripherals can be modeled in a two-fold manner. At the structural level, peripherals have diverse implementations, but we can abstract them with a limited set of primitives because their hardware implementations are based on common hardware concepts. At the semantic level, peripherals have diverse functionalities. However, we can use a single unified semantic model to describe the same kind of peripherals because they exhibit similar functionalities. Building on this, we propose FlexEmu, a flexible MCU peripheral emulation framework. Once semantic models are created, FlexEmu automatically extracts peripheral-specific details to instantiate models and generate emulators accordingly. We have successfully applied FlexEmu to model 12 kinds of MCU peripherals. Our evaluation on 90 firmware samples across 15 different MCU platforms shows that the automatically generated emulators can faithfully replicate hardware behaviors and achieve a 98.48% unit test passing rate, outperforming state-of-the-art approaches. To demonstrate the implications of FlexEmu on firmware security, we use the generated emulators to fuzz three popular RTOSes and uncover 10 previously unknown bugs.
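To illustrate what a unified semantic model might look like, a hedged sketch of a generic UART model; register offsets, status bits, and method names are hypothetical, and FlexEmu instantiates such models automatically from peripheral-specific details rather than by hand.

```python
class UARTModel:
    """Hypothetical unified semantic model for UART-class peripherals."""
    STATUS_TX_READY = 1 << 0
    STATUS_RX_READY = 1 << 1

    def __init__(self):
        self.rx_fifo = bytearray()

    def read(self, offset: int) -> int:
        if offset == 0x00:                         # DATA register: receive
            return self.rx_fifo.pop(0) if self.rx_fifo else 0
        if offset == 0x04:                         # STATUS register
            status = self.STATUS_TX_READY
            if self.rx_fifo:
                status |= self.STATUS_RX_READY
            return status
        return 0

    def write(self, offset: int, value: int) -> None:
        if offset == 0x00:                         # DATA register: transmit
            print(chr(value & 0xFF), end="")

    def inject_input(self, data: bytes) -> None:   # used by the host or a fuzzer
        self.rx_fifo += data

uart = UARTModel()
uart.inject_input(b"A")
assert uart.read(0x04) & UARTModel.STATUS_RX_READY
```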
Backdoor (trojan) attacks embed hidden, controllable behaviors into machine-learning models so that models behave normally on benign inputs but produce attacker-chosen outputs when a trigger is present. This survey reviews the rapidly growing literature on backdoor attacks and defenses in the computer-vision domain. We introduce a multi-dimensional taxonomy that organizes attacks and defenses by injection stage (dataset poisoning, model/parameter modification, inference-time injection), trigger type (patch, blended/frequency, semantic, transformation), labeling strategy (dirty-label vs. clean-label / feature-collision), representation stage (instance-specific, manifold/class-level, neuron/parameter hijacking, distributed encodings), and target task (classification, detection, segmentation, video, multimodal). For each axis we summarize representative methods, highlight evaluation practices, and discuss where defenses succeed or fail. For example, many classical sanitization and reverse-engineering tools are effective against reusable patch attacks but struggle with input-aware, sample-specific, or parameter-space backdoors and with transfer via compromised pre-trained encoders or hardware bit-flips. We synthesize trends, identify persistent gaps (supply-chain and hardware threats, certifiable defenses, cross-task benchmarks), and propose practical guidelines for threat-aware evaluation and layered defenses. This survey aims to orient researchers and practitioners to the current threat landscape and pressing research directions in secure computer vision.
In production tool-use agents (e.g., retrieval $\to$ summarization $\to$ calculator), routers must know when to stop exploring while preserving local DP and leaving an auditable trail. We present run-wise early-stopping certificates for perturb-and-MAP (PaM) best-first search on context-indexed prefix DAGs whose children partition the leaves. We couple realized path scores and pruning keys to a single exponential race realized lazily via offset propagation. With exact leaf counts $N(v)$, lazy reuse at winners and independent residuals yield an Exact mode with a sound halting rule based on $\mathrm{Key}(v) = M_\tau(v) - \log t(v)$, where $t(v)$ is the minimum arrival time among leaves under $v$. With only upper bounds $N_{ub} \ge N$, a Surrogate mode uses a parent-anchored surrogate race without winner reuse; because $-\log \hat t \ge -\log t$, the frontier invariant holds and stopping remains sound. We add a compiler from shared-node DAGs to prefix DAGs, local finiteness checks, a SuffixCountDP routine for exact counts with safe downgrades, a validator-side tightening term $\kappa = \log(N/N_{ub})$, and an auditable ledger/validator that replays runs deterministically. We also give an absolute LogSumExp tail bound, an acyclicity certificate, and a fallback PRF-per-leaf scheme (NoCert) whose work matches a realized-score best-first baseline up to a small per-node overhead. Finally, we integrate a price/latency/$(\epsilon, \delta)$-aware multi-LLM controller and DP-trained LoRA adapters chosen at runtime; these choices do not affect the two-mode frontier invariants. We report Mac/commodity-hardware reproducible results, a small real tool-use pipeline, and validator-checked audit trails, with code and ledgers provided.
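As a hedged reminder of the perturb-and-MAP primitive the certificates build on (this sketch omits the lazy offset propagation, exact leaf counts, and the $\mathrm{Key}(v)$ halting rule): adding i.i.d. Gumbel noise to leaf scores and taking the argmax draws a leaf with probability proportional to $\exp(\text{score})$, i.e., a softmax sample.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = np.array([2.0, 1.0, 0.0, -1.0])            # log-potentials of 4 leaves

def pam_sample(scores):
    # Gumbel-max trick: argmax of (score + Gumbel noise) ~ softmax(scores)
    gumbel = -np.log(-np.log(rng.random(scores.shape)))
    return int(np.argmax(scores + gumbel))

draws = np.bincount([pam_sample(scores) for _ in range(100_000)], minlength=4)
print(draws / draws.sum())                           # empirical frequencies
print(np.exp(scores) / np.exp(scores).sum())         # target softmax distribution
```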
As on-device LLMs (e.g., Apple on-device Intelligence) are widely adopted to reduce network dependency, improve privacy, and enhance responsiveness, verifying the legitimacy of models running on local devices becomes critical. Existing attestation techniques are not suitable for billion-parameter Large Language Models (LLMs), struggling to remain both time- and memory-efficient while addressing emerging threats in the LLM era. In this paper, we present AttestLLM, the first-of-its-kind attestation framework to protect the hardware-level intellectual property (IP) of device vendors by ensuring that only authorized LLMs can execute on target platforms. AttestLLM leverages an algorithm/software/hardware co-design approach to embed robust watermarking signatures onto the activation distributions of LLM building blocks. It also optimizes the attestation protocol within the Trusted Execution Environment (TEE), providing efficient verification without compromising inference throughput. Extensive proof-of-concept evaluations on LLMs from Llama, Qwen, and Phi families for on-device use cases demonstrate AttestLLM's attestation reliability, fidelity, and efficiency. Furthermore, AttestLLM enforces model legitimacy and exhibits resilience against model replacement and forgery attacks.
Residual cross-talk in superconducting qubit devices creates a security vulnerability for emerging quantum cloud services. We demonstrate a Clifford-only Quantum Rowhammer (QR) attack (using just X and CNOT gates) that injects faults on IBM's 127-qubit Eagle processors without requiring pulse-level access. Experiments show that targeted hammering induces localized errors that are confined to the attack cycle and manifest primarily as phase noise, as confirmed by near 50% flip rates under Hadamard-basis probing. A full lattice sweep maps QR's spatial and temporal behavior, revealing reproducible corruption limited to qubits within two coupling hops and rapid recovery in subsequent benign cycles. Finally, we leverage these properties to outline a prime-and-probe covert channel, demonstrating that the clear separability between hammered and benign rounds enables highly reliable signaling without error correction. These findings underscore the need for hardware-level isolation and scheduler-aware defenses as multi-tenant quantum computing becomes standard.
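A hedged Qiskit sketch of the kind of circuit involved; the qubit layout and repetition count are assumptions, and the paper's exact hammering schedule is not reproduced. The attacker drives rapid X/CNOT activity on its own qubits while a victim qubit prepared in $|+\rangle$ is probed in the Hadamard basis, where crosstalk-induced phase noise shows up as near-50% flips.

```python
from qiskit import QuantumCircuit

HAMMER_ROUNDS = 50                       # assumed repetition count

qc = QuantumCircuit(3, 1)
qc.h(2)                                  # victim qubit prepared in |+>
for _ in range(HAMMER_ROUNDS):
    qc.x(0)                              # Clifford-only hammering on the
    qc.cx(0, 1)                          # attacker's own qubits 0 and 1
qc.h(2)                                  # Hadamard-basis probe of the victim
qc.measure(2, 0)                         # ideally 0; crosstalk-induced phase
                                         # noise flips the outcome ~50% of runs
```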
Coverage-guided fuzzing has been widely applied to address zero-day vulnerabilities in general-purpose software and operating systems. This approach relies on instrumenting the target code at compile time. However, applying it to industrial systems remains challenging due to proprietary, closed-source compiler toolchains and the lack of access to source code. FuzzBox addresses these limitations by integrating emulation with fuzzing: it dynamically instruments code during execution in a virtualized environment to inject fuzz inputs, detect failures, and analyze coverage, without requiring source code recompilation or hardware-specific dependencies. We show the effectiveness of FuzzBox through experiments in the context of a proprietary MILS (Multiple Independent Levels of Security) hypervisor for industrial applications. Additionally, we analyze the applicability of FuzzBox across commercial IoT firmware, showcasing its broad portability.
Faults can occur naturally or be injected intentionally. Intentionally injecting faults into hardware accelerators of Post-Quantum Cryptographic (PQC) algorithms may leak sensitive information, and such fault-injection attacks compromise the reliability of PQC implementations. The recently NIST-standardized key encapsulation mechanism (KEM), Kyber, may also leak information at the hardware implementation level. This work proposes three efficient and lightweight recomputation-based fault detection methods for Barrett Reduction in the Cooley-Tukey Butterfly Unit (CT-BU) of Kyber on a Field Programmable Gate Array (FPGA). The CT-BU and Barrett Reduction are fundamental components in structured lattice-based PQC algorithms, including Kyber, NTRU, Falcon, and CRYSTALS-Dilithium. This paper introduces a new algorithm, Recomputation with Swapped Operand (RESWO), for fault detection, whereas Recomputation with Negated Operand (RENO) and Recomputation with Shifted Operand (RESO) are existing methods used in other PQC hardware implementations. To the best of our knowledge, RENO and RESO have never been applied to Barrett Reduction before. The proposed RESWO method consumes a similar number of slices to RENO and RESO but exhibits lower delay than both. The fault detection efficiency of RESWO, RENO, and RESO is nearly 100%.
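A simplified software sketch of recomputation-based checking around Barrett reduction; the constants and the swapped-operand recomputation are illustrative, since the paper's RESWO targets the FPGA datapath, where the recomputed pass exercises the hardware differently and can therefore expose transient faults that a plain software re-execution would not.

```python
Q = 3329                         # Kyber modulus
K = 26                           # precision chosen so that zeta * b < 2**K
M = (1 << K) // Q                # precomputed Barrett constant floor(2^K / Q)

def barrett_reduce(x: int) -> int:
    """Reduce 0 <= x < 2**K modulo Q without division."""
    t = (x * M) >> K             # approximate quotient
    r = x - t * Q                # r lies in [0, 2Q)
    return r - Q if r >= Q else r

def butterfly_with_check(a: int, b: int, zeta: int) -> tuple[int, int]:
    # Cooley-Tukey butterfly: (a, b) -> (a + zeta*b, a - zeta*b) mod Q,
    # with a recomputation-based consistency check on the reduction.
    t = barrett_reduce(zeta * b)
    t_recomputed = barrett_reduce(b * zeta)   # swapped-operand recomputation
    if t != t_recomputed:                     # mismatch flags an injected fault
        raise RuntimeError("fault detected in Barrett reduction")
    return (a + t) % Q, (a - t) % Q

assert butterfly_with_check(100, 200, 17) == (171, 29)
```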
The growing use of FPGAs in reconfigurable systems introduces security risks through malicious bitstreams that could cause denial-of-service (DoS), data leakage, or covert attacks. We investigated chip-level malicious hardware payloads in embedded systems and proposed a supervised machine learning method to detect malicious bitstreams via static byte-level features. Our approach diverges from existing methods by analyzing bitstreams directly at the binary level, enabling real-time detection without requiring access to source code or netlists. Bitstreams were sourced from state-of-the-art (SOTA) benchmarks and re-engineered to target the Xilinx PYNQ-Z1 FPGA Development Board. Our dataset included 122 samples of benign and malicious configurations. The data were vectorized using byte frequency analysis, compressed using TSVD, and balanced using SMOTE to address class imbalance. Among the evaluated classifiers, Random Forest achieved a macro F1-score of 0.97, underscoring the viability of real-time Trojan detection on resource-constrained systems. The final model was serialized and successfully deployed via PYNQ to enable integrated bitstream analysis.
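A hedged sketch of the described pipeline using scikit-learn and imbalanced-learn; the component count, tree count, and the `bitstreams`/`labels` inputs are placeholders rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from imblearn.over_sampling import SMOTE

def byte_frequency(bitstream: bytes) -> np.ndarray:
    # Normalized histogram of the 256 possible byte values (static feature).
    counts = np.bincount(np.frombuffer(bitstream, dtype=np.uint8), minlength=256)
    return counts / max(len(bitstream), 1)

# `bitstreams` (list of bytes) and `labels` (benign/malicious) are assumed inputs.
X = np.vstack([byte_frequency(b) for b in bitstreams])
X = TruncatedSVD(n_components=32).fit_transform(X)     # compress byte features
X, y = SMOTE(random_state=0).fit_resample(X, labels)   # balance the classes
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```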