Loading...
Loading...
Browse, search and filter the latest cybersecurity research papers from arXiv
Static and hard-coded layer-two network identifiers are well known to present security vulnerabilities and endanger user privacy. In this work, we introduce a new privacy attack against Wi-Fi access points listed on secondhand marketplaces. Specifically, we demonstrate the ability to remotely gather a large quantity of layer-two Wi-Fi identifiers by programmatically querying the eBay marketplace and applying state-of-the-art computer vision techniques to extract IEEE 802.11 BSSIDs from the seller's posted images of the hardware. By leveraging data from a global Wi-Fi Positioning System (WPS) that geolocates BSSIDs, we obtain the physical locations of these devices both pre- and post-sale. In addition to validating the degree to which a seller's location matches the location of the device, we examine cases of device movement -- once the device is sold and then subsequently re-used in a new environment. Our work highlights a previously unrecognized privacy vulnerability and suggests, yet again, the strong need to protect layer-two network identifiers.
Quantum neural networks (QNNs) leverage quantum computing to create powerful and efficient artificial intelligence models capable of solving complex problems significantly faster than traditional computers. With the fast development of quantum hardware technology, such as superconducting qubits, trapped ions, and integrated photonics, quantum computers may become reality, accelerating the applications of QNNs. However, preparing quantum circuits and optimizing parameters for QNNs require quantum hardware support, expertise, and high-quality data. How to protect intellectual property (IP) of QNNs becomes an urgent problem to be solved in the era of quantum computing. We make the first attempt towards IP protection of QNNs by watermarking. To this purpose, we collect classical clean samples and trigger ones, each of which is generated by adding a perturbation to a clean sample, associated with a label different from the ground-truth one. The host QNN, consisting of quantum encoding, quantum state transformation, and quantum measurement, is then trained from scratch with the clean samples and trigger ones, resulting in a watermarked QNN model. During training, we introduce sample grouped and paired training to ensure that the performance on the downstream task can be maintained while achieving good performance for watermark extraction. When disputes arise, by collecting a mini-set of trigger samples, the hidden watermark can be extracted by analyzing the prediction results of the target model corresponding to the trigger samples, without accessing the internal details of the target QNN model, thereby verifying the ownership of the model. Experiments have verified the superiority and applicability of this work.
Machine-learning systems continue to advance at a rapid pace, demonstrating remarkable utility in various fields and disciplines. As these systems continue to grow in size and complexity, a nascent industry is emerging which aims to bring machine-learning-as-a-service (MLaaS) to market. Outsourcing the operation and training of these systems to powerful hardware carries numerous advantages, but challenges arise when privacy and the correctness of work carried out must be ensured. Recent advancements in the field of zero-knowledge cryptography have led to a means of generating arguments of integrity for any computation, which in turn can be efficiently verified by any party, in any place, at any time. In this work we prove the correct training of a differentially-private (DP) linear regression over a dataset of 50,000 samples on a single machine in less than 6 minutes, verifying the entire computation in 0.17 seconds. To our knowledge, this result represents the fastest known instance in the literature of provable-DP over a dataset of this size. We believe this result constitutes a key stepping-stone towards end-to-end private MLaaS.
The identification of the devices from which a message is received is part of security mechanisms to ensure authentication in wireless communications. Conventional authentication approaches are cryptography-based, which, however, are usually computationally expensive and not adequate in the Internet of Things (IoT), where devices tend to be low-cost and with limited resources. This paper provides a comprehensive survey of physical layer-based device fingerprinting, which is an emerging device authentication for wireless security. In particular, this article focuses on hardware impairment-based identity authentication and channel features-based authentication. They are passive techniques that are readily applicable to legacy IoT devices. Their intrinsic hardware and channel features, algorithm design methodologies, application scenarios, and key research questions are extensively reviewed here. The remaining research challenges are discussed, and future work is suggested that can further enhance the physical layer-based device fingerprinting.
Elliptic curve cryptography (ECC) has emerged as the dominant public-key protocol, with NIST standardizing parameters for binary field GF(2^m) ECC systems. This work presents a hardware implementation of a Hybrid Multiplication technique for modular multiplication over binary field GF(2m), targeting NIST B-163, 233, 283, and 571 parameters. The design optimizes the combination of conventional multiplication (CM) and Karatsuba multiplication (KM) to enhance elliptic curve point multiplication (ECPM). The key innovation uses CM for smaller operands (up to 41 bits for m=163) and KM for larger ones, reducing computational complexity and enhancing efficiency. The design is evaluated in three areas: Resource Utilization For m=163, the hybrid design uses 6,812 LUTs, a 39.82% reduction compared to conventional methods. For m=233, LUT usage reduces by 45.53% and 70.70% compared to overlap-free and bit-parallel implementations. Delay Performance For m=163, achieves 13.31ns delay, improving by 37.60% over bit-parallel implementations. For m=233, maintains 13.39ns delay. Area-Delay Product For m=163, achieves ADP of 90,860, outperforming bit-parallel (75,337) and digit-serial (43,179) implementations. For m=233, demonstrates 16.86% improvement over overlap-free and 96.10% over bit-parallel designs. Results show the hybrid technique significantly improves speed, hardware efficiency, and resource utilization for ECC cryptographic systems.
The advent of Open Radio Access Networks (O-RAN) introduces modularity and flexibility into 5G deployments but also surfaces novel security challenges across disaggregated interfaces. This literature review synthesizes recent research across thirteen academic and industry sources, examining vulnerabilities such as cipher bidding-down attacks, partial encryption exposure on control/user planes, and performance trade-offs in securing O-RAN interfaces like E2 and O1. The paper surveys key cryptographic tools -- SNOW-V, AES-256, and ZUC-256 -- evaluating their throughput, side-channel resilience, and adaptability to heterogeneous slices (eMBB, URLLC, mMTC). Emphasis is placed on emerging testbeds and AI-driven controllers that facilitate dynamic orchestration, anomaly detection, and secure configuration. We conclude by outlining future research directions, including hardware offloading, cross-layer cipher adaptation, and alignment with 3GPP TS 33.501 and O-RAN Alliance security mandates, all of which point toward the need for integrated, zero-trust architectures in 6G.
Power Side-Channel (PSC) attacks exploit power consumption patterns to extract sensitive information, posing risks to cryptographic operations crucial for secure systems. Traditional countermeasures, such as masking, face challenges including complex integration during synthesis, substantial area overhead, and susceptibility to optimization removal during logic synthesis. To address these issues, we introduce PoSyn, a novel logic synthesis framework designed to enhance cryptographic hardware resistance against PSC attacks. Our method centers on optimal bipartite mapping of vulnerable RTL components to standard cells from the technology library, aiming to minimize PSC leakage. By utilizing a cost function integrating critical characteristics from both the RTL design and the standard cell library, we strategically modify mapping criteria during RTL-to-netlist conversion without altering design functionality. Furthermore, we theoretically establish that PoSyn minimizes mutual information leakage, strengthening its security against PSC vulnerabilities. We evaluate PoSyn across various cryptographic hardware implementations, including AES, RSA, PRESENT, and post-quantum cryptographic algorithms such as Saber and CRYSTALS-Kyber, at technology nodes of 65nm, 45nm, and 15nm. Experimental results demonstrate a substantial reduction in success rates for Differential Power Analysis (DPA) and Correlation Power Analysis (CPA) attacks, achieving lows of 3% and 6%, respectively. TVLA analysis further confirms that synthesized netlists exhibit negligible leakage. Additionally, compared to conventional countermeasures like masking and shuffling, PoSyn significantly lowers attack success rates, achieving reductions of up to 72%, while simultaneously enhancing area efficiency by as much as 3.79 times.
We present JavelinGuard, a suite of low-cost, high-performance model architectures designed for detecting malicious intent in Large Language Model (LLM) interactions, optimized specifically for production deployment. Recent advances in transformer architectures, including compact BERT(Devlin et al. 2019) variants (e.g., ModernBERT (Warner et al. 2024)), allow us to build highly accurate classifiers with as few as approximately 400M parameters that achieve rapid inference speeds even on standard CPU hardware. We systematically explore five progressively sophisticated transformer-based architectures: Sharanga (baseline transformer classifier), Mahendra (enhanced attention-weighted pooling with deeper heads), Vaishnava and Ashwina (hybrid neural ensemble architectures), and Raudra (an advanced multi-task framework with specialized loss functions). Our models are rigorously benchmarked across nine diverse adversarial datasets, including popular sets like the NotInject series, BIPIA, Garak, ImprovedLLM, ToxicChat, WildGuard, and our newly introduced JavelinBench, specifically crafted to test generalization on challenging borderline and hard-negative cases. Additionally, we compare our architectures against leading open-source guardrail models as well as large decoder-only LLMs such as gpt-4o, demonstrating superior cost-performance trade-offs in terms of accuracy, and latency. Our findings reveal that while Raudra's multi-task design offers the most robust performance overall, each architecture presents unique trade-offs in speed, interpretability, and resource requirements, guiding practitioners in selecting the optimal balance of complexity and efficiency for real-world LLM security applications.
Inter-VM RowHammer is an attack that induces a bitflip beyond the boundaries of virtual machines (VMs) to compromise a VM from another, and some software-based techniques have been proposed to mitigate this attack. Evaluating these mitigation techniques requires to confirm that they actually mitigate inter-VM RowHammer in low overhead. A challenge in this evaluation process is that both the mitigation ability and the overhead depend on the underlying hardware whose DRAM address mappings are different from machine to machine. This makes comprehensive evaluation prohibitively costly or even implausible as no machine that has a specific DRAM address mapping might be available. To tackle this challenge, we propose a simulation-based framework to evaluate software-based inter-VM RowHammer mitigation techniques across configurable DRAM address mappings. We demonstrate how to reproduce existing mitigation techniques on our framework, and show that it can evaluate the mitigation abilities and performance overhead of them with configurable DRAM address mappings.
In the age of IoT and mobile platforms, ensuring that content stay authentic whilst avoiding overburdening limited hardware is a key problem. This study introduces hybrid Fast Wavelet Transform & Additive Quantization index Modulation (FWT-AQIM) scheme, a lightweight watermarking approach that secures digital pictures on low-power, memory-constrained small scale devices to achieve a balanced trade-off among robustness, imperceptibility, and computational efficiency. The method embeds watermark in the luminance component of YCbCr color space using low-frequency FWT sub-bands, minimizing perceptual distortion, using additive QIM for simplicity. Both the extraction and embedding processes run in less than 40 ms and require minimum RAM when tested on a Raspberry Pi 5. Quality assessments on standard and high-resolution images yield PSNR greater than equal to 34 dB and SSIM greater than equal to 0.97, while robustness verification includes various geometric and signal-processing attacks demonstrating near-zero bit error rates and NCC greater than equal to 0.998. Using a mosaic-based watermark, redundancy added enhancing robustness without reducing throughput, which peaks at 11 MP/s. These findings show that FWT-AQIM provides an efficient, scalable solution for real-time, secure watermarking in bandwidth- and power-constrained contexts, opening the way for dependable content protection in developing IoT and multimedia applications.
The confidentiality of trained AI models on edge devices is at risk from side-channel attacks exploiting power and electromagnetic emissions. This paper proposes a novel training methodology to enhance resilience against such threats by introducing randomized and interchangeable model configurations during inference. Experimental results on Google Coral Edge TPU show a reduction in side-channel leakage and a slower increase in t-scores over 20,000 traces, demonstrating robustness against adversarial observations. The defense maintains high accuracy, with about 1% degradation in most configurations, and requires no additional hardware or software changes, making it the only applicable solution for existing Edge TPUs.
Extended reality (XR) systems, which consist of virtual reality (VR), augmented reality (AR), and mixed reality (XR), offer a transformative interface for immersive, multi-modal, and embodied human-computer interaction. In this paper, we envision that multi-modal multi-task (M3T) federated foundation models (FedFMs) can offer transformative capabilities for XR systems through integrating the representational strength of M3T foundation models (FMs) with the privacy-preserving model training principles of federated learning (FL). We present a modular architecture for FedFMs, which entails different coordination paradigms for model training and aggregations. Central to our vision is the codification of XR challenges that affect the implementation of FedFMs under the SHIFT dimensions: (1) Sensor and modality diversity, (2) Hardware heterogeneity and system-level constraints, (3) Interactivity and embodied personalization, (4) Functional/task variability, and (5) Temporality and environmental variability. We illustrate the manifestation of these dimensions across a set of emerging and anticipated applications of XR systems. Finally, we propose evaluation metrics, dataset requirements, and design tradeoffs necessary for the development of resource-aware FedFMs in XR. This perspective aims to chart the technical and conceptual foundations for context-aware privacy-preserving intelligence in the next generation of XR systems.
Confidential computing has gained traction across major architectures with Intel TDX, AMD SEV-SNP, and Arm CCA. Unlike TDX and SEV-SNP, a key challenge in researching Arm CCA is the absence of hardware support, forcing researchers to develop ad-hoc performance prototypes on non-CCA Arm boards. This approach leads to duplicated efforts, inconsistent performance comparisons, and high barriers to entry. To address this, we present OpenCCA, an open research platform that enables the execution of CCA-bound code on commodity Armv8.2 hardware. By systematically adapting the software stack -- including bootloader, firmware, hypervisor, and kernel -- OpenCCA emulates CCA operations for performance evaluation while preserving functional correctness. We demonstrate its effectiveness with typical life-cycle measurements and case-studies inspired by prior CCA-based papers on a easily available Armv8.2 Rockchip board that costs $250.
Frontier AI models pose increasing risks to public safety and international security, creating a pressing need for AI developers to provide credible guarantees about their development activities without compromising proprietary information. We propose Flexible Hardware-Enabled Guarantees (flexHEG), a system integrated with AI accelerator hardware to enable verifiable claims about compute usage in AI development. The flexHEG system consists of two primary components: an auditable Guarantee Processor that monitors accelerator usage and verifies compliance with specified rules, and a Secure Enclosure that provides physical tamper protection. In this report, we analyze technical implementation options ranging from firmware modifications to custom hardware approaches, with focus on an "Interlock" design that provides the Guarantee Processor direct access to accelerator data paths. Our proposed architecture could support various guarantee types, from basic usage auditing to sophisticated automated verification. This work establishes technical foundations for hardware-based AI governance mechanisms that could be deployed by 2027 to address emerging regulatory and international security needs in frontier AI development.
Large Language Models (LLMs) offer transformative capabilities for hardware design automation, particularly in Verilog code generation. However, they also pose significant data security challenges, including Verilog evaluation data contamination, intellectual property (IP) design leakage, and the risk of malicious Verilog generation. We introduce SALAD, a comprehensive assessment that leverages machine unlearning to mitigate these threats. Our approach enables the selective removal of contaminated benchmarks, sensitive IP and design artifacts, or malicious code patterns from pre-trained LLMs, all without requiring full retraining. Through detailed case studies, we demonstrate how machine unlearning techniques effectively reduce data security risks in LLM-aided hardware design.
As hardware design complexity increases, hardware fuzzing emerges as a promising tool for automating the verification process. However, a significant gap still exists before it can be applied in industry. This paper aims to summarize the current progress of hardware fuzzing from an industry-use perspective and propose solutions to bridge the gap between hardware fuzzing and industrial verification. First, we review recent hardware fuzzing methods and analyze their compatibilities with industrial verification. We establish criteria to assess whether a hardware fuzzing approach is compatible. Second, we examine whether current verification tools can efficiently support hardware fuzzing. We identify the bottlenecks in hardware fuzzing performance caused by insufficient support from the industrial environment. To overcome the bottlenecks, we propose a prototype, HwFuzzEnv, providing the necessary support for hardware fuzzing. With this prototype, the previous hardware fuzzing method can achieve a several hundred times speedup in industrial settings. Our work could serve as a reference for EDA companies, encouraging them to enhance their tools to support hardware fuzzing efficiently in industrial verification.
Self-Sovereign Identity (SSI) is a novel identity model that empowers individuals with full control over their data, enabling them to choose what information to disclose, with whom, and when. This paradigm is rapidly gaining traction worldwide, supported by numerous initiatives such as the European Digital Identity (EUDI) Regulation or Singapore's National Digital Identity (NDI). For instance, by 2026, the EUDI Regulation will enable all European citizens to seamlessly access services across Europe using Verifiable Credentials (VCs). A key feature of SSI is the ability to selectively disclose only specific claims within a credential, enhancing privacy protection of the identity owner. This paper proposes a novel mechanism designed to achieve Compact and Selective Disclosure for VCs (CSD-JWT). Our method leverages a cryptographic accumulator to encode claims within a credential to a unique, compact representation. We implemented CSD-JWT as an open-source solution and extensively evaluated its performance under various conditions. CSD-JWT provides significant memory savings, reducing usage by up to 46% compared to the state-of-the-art. It also minimizes network overhead by producing remarkably smaller Verifiable Presentations (VPs), reduced in size by 27% to 93%. Such features make CSD-JWT especially well-suited for resource-constrained devices, including hardware wallets designed for managing credentials.
The current landscape of system-on-chips (SoCs) security verification faces challenges due to manual, labor-intensive, and inflexible methodologies. These issues limit the scalability and effectiveness of security protocols, making bug detection at the Register-Transfer Level (RTL) difficult. This paper proposes a new framework named BugWhisperer that utilizes a specialized, fine-tuned Large Language Model (LLM) to address these challenges. By enhancing the LLM's hardware security knowledge and leveraging its capabilities for text inference and knowledge transfer, this approach automates and improves the adaptability and reusability of the verification process. We introduce an open-source, fine-tuned LLM specifically designed for detecting security vulnerabilities in SoC designs. Our findings demonstrate that this tailored LLM effectively enhances the efficiency and flexibility of the security verification process. Additionally, we introduce a comprehensive hardware vulnerability database that supports this work and will further assist the research community in enhancing the security verification process.