Browse, search and filter the latest cybersecurity research papers from arXiv
The strong scaling potential of neutral atom quantum computing has made it a popular modality in recent years. For state preparation, atoms are loaded stochastically and must be detected and rearranged at runtime to create a predetermined initial configuration for circuit execution. Existing rearrangement schemes either suffer from low parallelizability, in the case of acousto-optic deflector (AOD)-based approaches, or are comparatively slow, in the case of spatial light modulators (SLMs). In our work, we introduce an algorithm that improves the parallelizability of the former. Since the transfer of atoms from static SLM traps to AOD-generated movable traps is detrimental both in terms of atom loss rates and execution time, our approach is based on highly parallel composite moves in which many atoms are picked up simultaneously and maneuvered into target positions that may be comparatively distant. Our algorithm outperforms its alternatives for near-term devices with up to around 1000 qubits and has the potential to scale to several thousand with further optimizations.
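As a rough illustration of the constraint such composite moves must respect (our sketch, not the authors' algorithm): atoms moved simultaneously by a crossed-AOD grid must keep their relative ordering along each axis, so a maximal one-shot batch along a single row can be found as a longest increasing subsequence of target positions.

```python
# Sketch (not the paper's algorithm): along one AOD axis, traps cannot
# cross, so atoms moved in one composite move must preserve relative order.
# A maximal such batch is a longest increasing subsequence (LIS) of target
# positions over atoms sorted by source position.
from bisect import bisect_left

def largest_composite_move(pairs):
    """pairs: (source_pos, target_pos) per atom; returns a maximal subset
    that can be picked up and moved simultaneously without trap crossings."""
    pairs = sorted(pairs)                      # scan atoms by source position
    targets = [t for _, t in pairs]
    tails, prev = [], [-1] * len(targets)      # patience-sorting bookkeeping
    for i, t in enumerate(targets):
        k = bisect_left([targets[j] for j in tails], t)
        prev[i] = tails[k - 1] if k else -1
        if k == len(tails):
            tails.append(i)
        else:
            tails[k] = i
    batch, i = [], tails[-1] if tails else -1
    while i != -1:                             # backtrack the chosen atoms
        batch.append(pairs[i])
        i = prev[i]
    return batch[::-1]

# Five atoms on one row: three can be relocated in a single composite move.
print(largest_composite_move([(0, 2), (1, 0), (2, 3), (3, 1), (4, 4)]))
```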
Ride-sourcing platforms such as Uber and Lyft are prime examples of the gig economy, recruiting drivers as independent contractors and thereby avoiding legal and fiscal obligations. Although platforms offer flexibility in choosing work shifts and areas, many drivers experience low income and poor working conditions, leading to widespread strikes and protests. Minimum wage regulation has been adopted to improve drivers' welfare. However, the impacts of this regulation on drivers, as well as on travelers and platforms, remain largely unknown. Ride-sourcing platforms do not disclose the relevant data, and state-of-the-art models fail to explain the effects of minimum wage regulation on market dynamics. In this study, we assess the effectiveness and implications of minimum wage regulation in ride-sourcing markets by simulating the detailed dynamics of these markets under varying regulation intensities, both with and without the so-called platform lockout strategy. Our findings reveal that minimum wage regulation substantially impacts drivers' income, and may lead to higher fares for travelers and threaten platforms' survival. When platforms adopt a lockout strategy, their profitability improves significantly and the retained drivers earn more, although many others lose their jobs, and the service level for travelers consequently declines.
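To make the mechanics concrete, here is a deliberately toy single-day market model (our own illustration, not the paper's simulator; all parameters are invented) showing how a wage floor plus a lockout cap can raise pay for retained drivers and restore platform profit while shrinking the workforce.

```python
# Toy single-day market (invented parameters): demand is shared by active
# drivers, the platform takes a 25% commission, and the platform tops pay
# up to a wage floor; an optional lockout cap limits who may go online.
def simulate_day(n_drivers, demand, fare=20.0, wage_floor=0.0, lockout_cap=None):
    active = min(n_drivers, lockout_cap) if lockout_cap else n_drivers
    trips_per_driver = demand / active
    market_earnings = trips_per_driver * fare * 0.75
    pay = max(market_earnings, wage_floor)          # regulation binds if low
    platform_profit = demand * fare - pay * active
    return active, round(pay, 1), round(platform_profit, 1)

for cap in (None, 600):   # without vs. with a lockout strategy
    print(cap, simulate_day(n_drivers=1000, demand=4000, wage_floor=80.0,
                            lockout_cap=cap))
# None -> 1000 drivers all at the 80.0 floor, zero platform profit;
# 600  -> retained drivers earn 100.0 and profit returns, but 400 lose work.
```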
Wireless techniques for monitoring human vital signs, such as heart and breathing rates, offer a promising solution in the context of joint communication and sensing (JCAS), with applications in medicine, sports, safety, security, and even the military. This paper reports experimental results obtained at the Fraunhofer Institute for Integrated Circuits in Ilmenau, demonstrating the effectiveness of an indoor orthogonal frequency-division multiplexing (OFDM) JCAS system for detecting human heart and breathing rates. The system operated in a bistatic configuration at an FR2 frequency of 26.5 GHz with a variable bandwidth of up to 1 GHz. Measurements were taken under various scenarios, including a subject lying down, sitting, or walking, in both line-of-sight and non-line-of-sight conditions, and with one or two subjects present simultaneously. The results indicate that while vital sign detection is generally feasible, its effectiveness is influenced by several factors, such as the subject's clothing and activity, as well as the distance and angle relative to the sensing system. In addition, no significant influence of bandwidth was detected, since the vital-sign information is encoded in the phase of the signal.
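For intuition on the phase-based detection (a generic sketch of such a pipeline, not the exact Fraunhofer processing chain; the rates and signal model below are assumed): chest motion phase-modulates the channel, so breathing and heartbeat appear as spectral peaks in the unwrapped channel phase.

```python
# Generic phase-based vital-sign sketch (assumed rates and signal model):
# FFT of the unwrapped channel phase reveals breathing (~0.1-0.5 Hz) and
# heartbeat (~0.8-2 Hz) peaks.
import numpy as np

fs = 100.0                        # channel-estimate rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)      # 60 s observation window
h = np.exp(1j * (0.5 * np.sin(2 * np.pi * 0.25 * t)      # breathing, 0.25 Hz
                 + 0.05 * np.sin(2 * np.pi * 1.2 * t)))   # heartbeat, 1.2 Hz
h += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

phase = np.unwrap(np.angle(h))    # the vital signs live in the phase
phase -= phase.mean()
spec = np.abs(np.fft.rfft(phase * np.hanning(phase.size)))
freqs = np.fft.rfftfreq(phase.size, 1 / fs)

band = lambda lo, hi: (freqs > lo) & (freqs < hi)
b, hr = band(0.1, 0.5), band(0.8, 2.0)
print(f"breathing ~ {freqs[b][np.argmax(spec[b])]:.2f} Hz, "
      f"heart ~ {freqs[hr][np.argmax(spec[hr])]:.2f} Hz")
```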
Vision-Language-Action (VLA) models have emerged as powerful generalist policies for robotic control, yet their performance scaling across model architectures and hardware platforms, as well as their associated power budgets, remains poorly understood. This work presents an evaluation of five representative VLA models -- spanning state-of-the-art baselines and two newly proposed architectures -- targeting edge and datacenter GPU platforms. Using the LIBERO benchmark, we measure accuracy alongside system-level metrics, including latency, throughput, and peak memory usage, under varying edge power constraints and high-performance datacenter GPU configurations. Our results identify distinct scaling trends: (1) architectural choices, such as action tokenization and model backbone size, strongly influence throughput and memory footprint; (2) power-constrained edge devices exhibit non-linear performance degradation, with some configurations matching or exceeding older datacenter GPUs; and (3) high-throughput variants can be achieved without significant accuracy loss. These findings provide actionable insights for selecting and optimizing VLAs across a range of deployment constraints, and challenge current assumptions about the superiority of datacenter hardware for robotic inference.
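A generic harness for such system-level measurements might look as follows (our sketch; `model` and `batch` are placeholders, and a CUDA device is assumed).

```python
# Generic profiling harness (ours): average latency, throughput, and peak
# GPU memory for a policy `model` on a prepared input `batch` (placeholders).
import time
import torch

def profile_policy(model, batch, n_iters=100, warmup=10):
    model.eval()
    torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        for _ in range(warmup):            # warm up kernels / autotuning
            model(batch)
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(n_iters):
            model(batch)
        torch.cuda.synchronize()           # wait for queued GPU work
    dt = (time.perf_counter() - t0) / n_iters
    return {"latency_ms": 1e3 * dt,
            "throughput_hz": 1.0 / dt,
            "peak_mem_gb": torch.cuda.max_memory_allocated() / 2**30}
```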
Decentralized data-feed systems enable blockchain-based smart contracts to access off-chain information by aggregating values from multiple oracles. To improve accuracy, these systems typically use an aggregation function, such as majority voting, to consolidate the inputs they receive from oracles and make a decision. Depending on the final decision and the values reported by the oracles, the participating oracles are compensated through shared rewards. However, such incentive mechanisms are vulnerable to mirroring attacks, where a single user controls multiple oracles to bias the decision of the aggregation function and maximize rewards. This paper analyzes the impact of mirroring attacks on the reliability and dependability of majority voting-based data-feed systems. We demonstrate how existing incentive mechanisms can unintentionally encourage rational users to implement such attacks. To address this, we propose a new incentive mechanism that discourages Sybil behavior. We prove that the proposed mechanism leads to a Nash Equilibrium in which each user operates only one oracle. Finally, we discuss the practical implementation of the proposed incentive mechanism and provide numerical examples to demonstrate its effectiveness.
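A toy calculation (ours, not the paper's model) shows why a flat per-correct-oracle reward invites mirroring: both the chance of dictating the majority outcome and the number of rewarded reports grow with the number of mirrored oracles.

```python
# Toy payoff model (ours): a user mirrors one value across k of n oracles;
# honest oracles independently agree with probability q, and every oracle
# on the winning side earns a flat reward of 10. Expected payout rises
# with k -- exactly the incentive the proposed mechanism is built to remove.
from math import comb

def p_win(k, n=11, q=0.8):
    """Chance the mirrored value wins the majority vote."""
    m, need = n - k, n // 2 + 1 - k        # honest votes still required
    return sum(comb(m, j) * q**j * (1 - q)**(m - j)
               for j in range(max(need, 0), m + 1))

for k in (1, 2, 4, 6):
    print(k, round(p_win(k) * k * 10, 2))  # mirroring strictly pays here
```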
Beamforming techniques are utilized in millimeter wave (mmWave) communication to address the inherent path loss limitation, thereby establishing and maintaining reliable connections. However, adopting the standard-defined beamforming approach in highly dynamic vehicular environments often incurs high beam training overhead and reduces the available airtime for communication, mainly due to the exchange of pilot signals and exhaustive beam measurements. To this end, we present a multi-modal sensing and fusion learning framework as a potential alternative for reducing such overhead. In this framework, we first extract features from the visual and GPS-coordinate sensing modalities individually using modality-specific encoders, and subsequently fuse the multimodal features to predict the top-k beams so that the best line-of-sight links can be proactively established. To show the generalizability of the proposed framework, we perform comprehensive experiments on four different vehicle-to-vehicle (V2V) scenarios from a real-world multi-modal sensing and communication dataset. We observe that the proposed framework achieves up to 77.58% accuracy in predicting the top-15 beams, outperforms single modalities, incurs an average power loss as low as roughly 2.32 dB, and reduces the beam search space by 76.56% for the top-15 beams relative to the standard-defined approach.
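A schematic of such a fusion architecture (our simplified stand-in, not the authors' exact encoders, dimensions, or codebook size) in PyTorch:

```python
# Schematic multi-modal fusion (ours): image and GPS features are fused and
# scored against a beam codebook; the top-k logits give candidate beams.
import torch
import torch.nn as nn

class BeamFusion(nn.Module):
    def __init__(self, n_beams=64):
        super().__init__()
        self.img_enc = nn.Sequential(       # tiny CNN stand-in
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
        self.gps_enc = nn.Sequential(       # GPS-coordinate encoder
            nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 64))
        self.head = nn.Linear(128, n_beams)

    def forward(self, img, gps):
        z = torch.cat([self.img_enc(img), self.gps_enc(gps)], dim=-1)
        return self.head(z)                 # beam logits

model = BeamFusion()
logits = model(torch.randn(1, 3, 96, 96), torch.randn(1, 2))
top15 = logits.topk(15, dim=-1).indices    # proactive top-15 beam candidates
```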
Protein-ligand binding affinity is critical in drug discovery, but determining it experimentally is time-consuming and expensive. Artificial intelligence (AI) has been used to predict binding affinity, significantly accelerating this process. However, the high performance requirements and vast datasets involved in affinity prediction demand increasingly large AI models, requiring substantial computational resources and training time. Quantum machine learning has emerged as a promising solution to these challenges. In particular, hybrid quantum-classical models can reduce the number of parameters while maintaining or improving performance compared to classical counterparts. Despite these advantages, open questions persist: why hybrid quantum models achieve these benefits, whether quantum neural networks (QNNs) can replace classical neural networks, and whether such models are feasible on noisy intermediate-scale quantum (NISQ) devices. This study addresses these questions by proposing a hybrid quantum neural network (HQNN) that empirically demonstrates the capability to approximate non-linear functions in the latent feature space derived from a classical embedding. The primary goal of this study is a parameter-efficient model for binding affinity prediction that remains feasible on NISQ devices. Numerical results indicate that the HQNN achieves performance and parameter efficiency comparable or superior to classical neural networks, underscoring its potential as a viable replacement. This study highlights the potential of hybrid QML in computational drug discovery, offering insights into its applicability and advantages in addressing the computational challenges of protein-ligand binding affinity prediction.
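A minimal hybrid layout in this spirit (a generic sketch using PennyLane's Torch integration, not the authors' exact HQNN; the layer sizes are assumed):

```python
# Generic hybrid layout (ours): a classical embedding is compressed to a
# small latent vector, a variational circuit acts as the non-linear block,
# and a linear head regresses binding affinity.
import pennylane as qml
import torch

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))       # latent -> angles
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(circuit, {"weights": (2, n_qubits)})
model = torch.nn.Sequential(
    torch.nn.Linear(128, n_qubits),   # classical embedding -> latent
    qlayer,                           # QNN in place of a classical hidden layer
    torch.nn.Linear(n_qubits, 1),     # affinity regressor
)
print(model(torch.randn(8, 128)).shape)   # torch.Size([8, 1])
```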
The Metaverse emerges from the integration of highly distributed, complex, and interconnected technologies. These technologies need to be formally verified and evaluated through formal modelling before they are deployed in real-world applications, in order to avoid negative real-world impacts caused by their failure. However, the formal modelling of Metaverse technologies is challenging due to their highly complex nature, so a comprehensive formal verification approach is needed for their realization across multiple potential areas. In this study, a framework is proposed for the formal modelling of Metaverse technologies that provides holistic insight across Metaverse applications; using the proposed framework, Metaverse applications of any complexity can be modelled. The working of the framework is illustrated through a case study of an Air Traffic Control system, whose behaviour we model with hierarchical colored Petri nets. The correctness of air traffic control system properties, such as liveness, reachability, and boundedness, is verified within the framework. The results of the case study reveal that the proposed framework can serve as a template for the mathematical verification of challenging and complex Metaverse applications, and that formal modelling is an effective tool for identifying flaws in the early phases of the design of Metaverse applications. The implication of using formal verification is that it can increase confidence in the correctness of Metaverse applications.
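The underlying token-game semantics can be shown with a minimal place/transition net (the colored and hierarchical features of CPNs are omitted; the ATC fragment below is our own toy example, not the paper's model).

```python
# Minimal place/transition token game: a toy ATC fragment where landing
# requires a free runway -- the kind of mutual-exclusion property that
# liveness/reachability/boundedness checks are run against.
marking = {"waiting": 2, "runway_free": 1, "landing": 0, "landed": 0}
transitions = {
    "start_landing":  ({"waiting": 1, "runway_free": 1}, {"landing": 1}),
    "finish_landing": ({"landing": 1}, {"landed": 1, "runway_free": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    assert enabled(name), f"{name} not enabled"
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

fire("start_landing")
print(enabled("start_landing"))   # False: the runway is mutually exclusive
fire("finish_landing")
print(marking)                    # runway freed; the next aircraft may land
```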
The success and wide adoption of generative AI (GenAI), particularly large language models (LLMs), have attracted the attention of cybercriminals seeking to abuse models, steal sensitive data, or disrupt services. Moreover, providing security to LLM-based systems is a great challenge, as both traditional threats to software applications and threats targeting LLMs and their integration must be mitigated. In this survey, we shed light on the security and privacy concerns of such LLM-based systems by performing a systematic review and comprehensive categorization of threats and defensive strategies across the entire software and LLM life cycles. We analyze real-world scenarios with distinct characteristics of LLM usage, spanning development to operation. In addition, threats are classified according to their severity level and the scenarios to which they pertain, facilitating the identification of the most relevant threats. Recommended defense strategies are systematically categorized and mapped to the corresponding life cycle phases and the attack strategies they mitigate. This work paves the way for consumers and vendors to understand and efficiently mitigate risks when integrating LLMs into their respective solutions or organizations. It also enables the research community to benefit from the discussion of open challenges and edge cases that may hinder the secure and privacy-preserving adoption of LLM-based systems.
Generative AI is reshaping UX design practices through "vibe coding," where UX professionals express intent in natural language and AI translates it into functional prototypes and code. Despite rapid adoption, little research has examined how vibe coding reconfigures UX workflows and collaboration. Drawing on interviews with 20 UX professionals across enterprises, startups, and academia, we show how vibe coding follows a four-stage workflow of ideation, AI generation, debugging, and review. This accelerates iteration, supports creativity, and lowers barriers to participation. However, professionals reported challenges of code unreliability, integration, and AI over-reliance. We find tensions between efficiency-driven prototyping ("intending the right design") and reflection ("designing the right intention"), introducing new asymmetries in trust, responsibility, and social stigma within teams. Through the lens of responsible human-AI collaboration for AI-assisted UX design and development, we contribute a deeper understanding of deskilling, ownership and disclosure, and creativity safeguarding in the age of vibe coding.
Deep learning-based recommendation models (DLRMs) are widely deployed in commercial applications to enhance user experience. However, the large and sparse embedding layers in these models impose substantial memory bandwidth bottlenecks due to high memory access costs and irregular access patterns, leading to increased inference time and energy consumption. While resistive random access memory (ReRAM) based crossbars offer a fast and energy-efficient solution through in-memory embedding reduction operations, naively mapping embeddings onto crossbar arrays leads to poor crossbar utilization and thus degrades performance. We present ReCross, an efficient ReRAM-based in-memory computing (IMC) scheme designed to minimize execution time and enhance energy efficiency in DLRM embedding reduction. ReCross co-optimizes embedding access patterns and ReRAM crossbar characteristics by intelligently grouping and mapping co-occurring embeddings, replicating frequently accessed embeddings across crossbars, and dynamically selecting in-memory processing operations using a newly designed dynamic switch ADC circuit that considers runtime energy trade-offs. Experimental results demonstrate that ReCross achieves a 3.97x reduction in execution time and a 6.1x improvement in energy efficiency compared to state-of-the-art IMC approaches.
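A simplified sketch of the co-occurrence grouping idea (our greedy approximation, not the ReCross mapping algorithm; the embedding replication and dynamic-switch ADC selection are omitted):

```python
# Simplified co-occurrence grouping (ours): place embeddings that are
# looked up together on the same crossbar so one in-array reduction covers
# more of each request.
from collections import Counter
from itertools import combinations

def group_embeddings(lookups, rows_per_crossbar):
    cooc = Counter()
    for req in lookups:                          # pairwise co-occurrence counts
        cooc.update(combinations(sorted(set(req)), 2))
    groups, group_of = [], {}
    for (a, b), _ in cooc.most_common():         # greedy: hottest pairs first
        ga, gb = group_of.get(a), group_of.get(b)
        if ga is None and gb is None:
            group_of[a] = group_of[b] = len(groups)
            groups.append({a, b})
        elif ga is None and len(groups[gb]) < rows_per_crossbar:
            groups[gb].add(a); group_of[a] = gb
        elif gb is None and ga is not None and len(groups[ga]) < rows_per_crossbar:
            groups[ga].add(b); group_of[b] = ga
    return groups                                # leftovers map to spare rows

print(group_embeddings([[1, 2, 3], [1, 2], [2, 3], [4, 5]], rows_per_crossbar=4))
# -> [{1, 2, 3}, {4, 5}]: the frequently co-accessed trio shares one crossbar
```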
We consider analog over-the-air federated learning, where devices harvest energy from in-band and out-of-band radio frequency signals, with the former also causing co-channel interference (CCI). To mitigate the aggregation error, we propose an effective denoising policy that does not require channel state information (CSI). We also propose an adaptive scheduling algorithm that dynamically adjusts the number of local training epochs based on available energy, enhancing device participation and learning performance while reducing energy consumption. Simulation results and convergence analysis confirm the robust performance of the algorithm compared to conventional methods. The performance of the proposed denoising method is shown to be comparable to that of conventional CSI-based methods. We also observe that high-power CCI severely degrades the learning performance, which can be mitigated by increasing the number of active devices, achievable via the adaptive algorithm.
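A minimal sketch of energy-aware epoch scheduling (our illustration of the general idea; the paper's actual rule and energy model may differ):

```python
# Energy-aware epoch scheduling sketch (ours): a device trains only as many
# local epochs as its harvested budget allows, always reserving energy for
# the over-the-air upload.
def schedule_epochs(battery_j, e_epoch_j, e_tx_j, max_epochs=5):
    if battery_j < e_tx_j + e_epoch_j:   # cannot train and still transmit
        return 0                          # sit this round out
    affordable = int((battery_j - e_tx_j) / e_epoch_j)
    return min(max_epochs, affordable)

for battery in (0.5, 2.0, 6.0, 20.0):     # joules harvested so far (assumed)
    print(battery, schedule_epochs(battery, e_epoch_j=1.0, e_tx_j=0.5))
# -> 0, 1, 5, 5 epochs: more harvested energy, more local computation
```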
Soil moisture monitoring is essential for agriculture and environmental management, yet existing methods require either invasive probes that disturb the soil or specialized equipment, limiting access for the public. We present SoilSound, a ubiquitous, accessible smartphone-based acoustic sensing system that can measure soil moisture without disturbing the soil. We leverage the built-in speaker and microphone to perform a vertical scan that accurately measures moisture without any calibration. Unlike existing work that relies on transmissive properties, we propose an alternative model for acoustic reflections in soil based on the surface-roughness effect, enabling moisture sensing without disturbing the soil. The system works by sending acoustic chirps towards the soil and recording the reflections during a vertical scan; the reflections are then processed and fed to a convolutional neural network for on-device soil moisture estimation with negligible computational, memory, and power overhead. We evaluated the system by training on curated soils in boxes in the lab and testing in outdoor fields, and show that SoilSound achieves a mean absolute error (MAE) of 2.39% across 10 different locations. Overall, the evaluation shows that SoilSound can accurately track soil moisture levels ranging from 15.9% to 34.0% across multiple soil types, environments, and users, without requiring any calibration or disturbing the soil, enabling widespread moisture monitoring for home gardeners, urban farmers, citizen scientists, and agricultural communities in resource-limited settings.
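The acoustic front end can be sketched as a matched-filter step (our simplified illustration of a chirp-echo pipeline; the echo delay, noise level, and sweep band below are assumed, and the real system feeds the processed reflections to the on-device CNN).

```python
# Simplified chirp-echo front end (ours): matched-filter the recording
# against the probe to locate the surface reflection whose profile the
# on-device CNN consumes.
import numpy as np
from scipy.signal import chirp, correlate

fs = 48_000                                 # typical phone audio rate
t = np.arange(0, 0.01, 1 / fs)              # 10 ms probe
tx = chirp(t, f0=16_000, t1=t[-1], f1=20_000)   # near-ultrasonic sweep

rx = np.zeros(fs // 50)                     # 20 ms recording (simulated here)
d = int(0.002 * fs)                         # echo arriving after 2 ms (assumed)
rx[d:d + tx.size] = 0.2 * tx
rx += 0.01 * np.random.randn(rx.size)

mf = correlate(rx, tx, mode="valid")        # matched filter against the chirp
delay_s = np.argmax(np.abs(mf)) / fs
print(f"echo delay ~ {1e3 * delay_s:.2f} ms")
```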
When someone sends us a thoughtful message, we naturally form judgments about their character. But what happens when that message carries a label indicating it was written with the help of AI? This paper investigates how the appearance of AI assistance affects our perceptions of message senders. Adding nuance to previous research, through two studies (N=399) featuring vignette scenarios, we find that AI-assistance labels don't necessarily make people view senders negatively. Rather, they dampen the strength of character signals in communication. We show that when someone sends a warmth-signalling message (like thanking or apologizing) without AI help, people more strongly categorize the sender as warm. At the same time, when someone sends a coldness-signalling message (like bragging or blaming) without assistance, people more confidently categorize them as cold. Interestingly, AI labels weaken both these associations: An AI-assisted apology makes the sender appear less warm than if they had written it themselves, and an AI-assisted blame makes the sender appear less cold than if they had composed it independently. This supports our signal diagnosticity explanation: messages labeled as AI-assisted are viewed as less diagnostic than messages which seem unassisted. We discuss how our findings shed light on the causal origins of previously reported observations in AI-Mediated Communication.
Road traffic accidents remain a significant global concern, with human error, particularly distracted and impaired driving, among the leading causes. This study introduces a novel driver behavior classification system that uses external observation techniques to detect indicators of distraction and impairment. The proposed framework employs advanced computer vision methodologies, including real-time object tracking, lateral displacement analysis, and lane position monitoring. The system identifies unsafe driving behaviors such as excessive lateral movement and erratic trajectory patterns by implementing the YOLO object detection model and custom lane estimation algorithms. Unlike systems reliant on inter-vehicular communication, this vision-based approach enables behavioral analysis of non-connected vehicles. Experimental evaluations on diverse video datasets demonstrate the framework's reliability and adaptability across varying road and environmental conditions.
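A sketch of the kind of post-processing that can flag weaving from tracked detections (our illustration; `lane_center_fn` stands in for the paper's lane-estimation stage, and the synthetic track and thresholds are not the authors'):

```python
# Illustrative weaving metric (ours): lateral offsets of the tracked box
# center around the lane center, summarized by spread and side-to-side
# crossings over a sliding window.
import numpy as np

def weaving_score(centers, lane_center_fn, fps=30, window_s=4):
    """centers: (N, 2) per-frame box centers in pixels."""
    offsets = np.array([cx - lane_center_fn(cy) for cx, cy in centers])
    n = int(fps * window_s)
    offsets = offsets[-n:] - offsets[-n:].mean()
    crossings = np.count_nonzero(np.diff(np.sign(offsets)))
    return offsets.std(), crossings

# Synthetic weaving track: sinusoidal drift around an assumed lane center.
centers = np.array([(320 + 40 * np.sin(i / 10), 400) for i in range(120)])
std, k = weaving_score(centers, lambda y: 320.0)
print(f"lateral std {std:.1f} px, {k} lane-center crossings")  # high -> flag
```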
6G must be designed to withstand, adapt to, and evolve amid prolonged, complex disruptions. The shift of mobile networks from efficiency-first to sustainability-aware design motivates this white paper's assertion that resilience must be a primary design goal, alongside sustainability and efficiency, encompassing technology, architecture, and economics. We promote resilience by analysing dependencies between mobile networks and other critical systems, such as energy, transport, and emergency services, and illustrate how cascading failures spread through infrastructures. We formalise resilience using the 3R framework: reliability, robustness, resilience. We then translate this into measurable capabilities: graceful degradation, situational awareness, rapid reconfiguration, and learning-driven improvement and recovery. Architecturally, we promote edge-native and locality-aware designs, open interfaces, and programmability to enable islanded operation, fallback modes, and multi-layer diversity (radio, compute, energy, timing). Key enablers include AI-native control loops with verifiable behaviour, zero-trust security rooted in hardware and supply-chain integrity, and networking techniques that prioritise critical traffic, time-sensitive flows, and inter-domain coordination. Resilience also has a techno-economic aspect: open platforms and high-quality complementors generate ecosystem externalities that enhance resilience while opening new markets. We identify nine business-model groups and several patterns aligned with the 3R objectives, and we outline governance and standardisation needs. This white paper serves as an initial step and catalyst for 6G resilience, aiming to inspire researchers, professionals, government officials, and the public, and to provide the essential components for understanding and shaping the development of 6G resilience.
Cyber warfare has become a central element of modern conflict, especially within multi-domain operations. As both a distinct and critical domain, cyber warfare requires integrating defensive and offensive technologies into coherent strategies. While prior research has emphasized isolated tactics or fragmented technologies, a holistic understanding is essential for effective resource deployment and risk mitigation. Game theory offers a unifying framework for this purpose. It not only models attacker-defender interactions but also provides quantitative tools for equilibrium analysis, risk assessment, and strategic reasoning. Integrated with modern AI techniques, game-theoretic models enable the design and optimization of strategies across multiple levels of cyber warfare, from policy and strategy to operations, tactics, and technical implementations. These models capture the paradoxical logic of conflict, where more resources do not always translate into greater advantage, and where nonlinear dynamics govern outcomes. To illustrate the approach, this chapter examines RedCyber, a synthetic cyber conflict, demonstrating how game-theoretic methods capture the interdependencies of cyber operations. The chapter concludes with directions for future research on resilience, cross-echelon planning, and the evolving role of AI in cyber warfare.
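A worked toy example of the equilibrium reasoning (ours, not the chapter's RedCyber scenario): a zero-sum 2x2 attack/defense game solved via the indifference condition, illustrating why mixed strategies, rather than raw resource totals, determine the expected outcome.

```python
# Worked 2x2 zero-sum example (ours): the defender's mix makes the attacker
# indifferent between targets, and vice versa; L[defend][attack] is an
# illustrative loss matrix for the defender.
import numpy as np

L = np.array([[1.0, 4.0],    # defend A vs. attack A / attack B
              [5.0, 2.0]])   # defend B vs. attack A / attack B

den = L[0, 0] - L[0, 1] - L[1, 0] + L[1, 1]
p = (L[1, 1] - L[1, 0]) / den        # defender's probability of defending A
q = (L[1, 1] - L[0, 1]) / den        # attacker's probability of hitting A
value = q * (p * L[0, 0] + (1 - p) * L[1, 0]) \
    + (1 - q) * (p * L[0, 1] + (1 - p) * L[1, 1])
print(f"p={p:.2f}, q={q:.2f}, expected loss={value:.2f}")   # 0.50, 0.33, 3.00
```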
The advent of quantum physics has revolutionized our understanding of the universe, replacing the deterministic framework of classical physics with a paradigm dominated by intrinsic randomness and quantum correlations. This shift has not only enabled groundbreaking technologies, such as quantum sensors, networks and computers, but has also unlocked entirely new possibilities for artistic expressions. In this paper, we explore the intersection of quantum mechanics and art, focusing on the use of quantum entanglement and inherent randomness as creative tools. Specifically, we present The Sound of Entanglement, a live musical performance driven by real-time measurements of entangled photons in a Bell test. By integrating the measured quantum correlations as a central compositional element and synchronizing live visuals with experimental data, the performance offers a unique and unrepeatable audiovisual experience that relies on quantum correlations which cannot be produced by any classical device. Through this fusion of science and art, we aim to provide a deeper appreciation of quantum phenomena while expanding the boundaries of creative expression.
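For reference, the quantity measured live in such a Bell test is the CHSH value (our illustrative calculation below): for a singlet state with the canonical analyzer angles it reaches 2*sqrt(2), beyond the classical bound of 2, which is why the resulting correlations cannot be produced by any classical device.

```python
# CHSH reference calculation (ours): for a photonic singlet state the
# correlation is E(a, b) = -cos(a - b); the canonical analyzer settings
# give |S| = 2*sqrt(2) ~ 2.83 > 2, the classical bound.
import numpy as np

E = lambda a, b: -np.cos(a - b)                 # quantum correlation
a1, a2 = 0.0, np.pi / 2                         # Alice's analyzer settings
b1, b2 = np.pi / 4, 3 * np.pi / 4               # Bob's analyzer settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))                   # 2.828... in both cases
```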