In next-generation wireless systems, providing location-based mobile computing services for energy-neutral devices has become a crucial objective for sustainable Internet of Things (IoT). Visible light positioning (VLP) has attracted considerable research attention as a complementary method to radio frequency (RF) solutions since it can leverage ubiquitous lighting infrastructure. However, conventional VLP receivers often rely on photodetectors or cameras that are power-hungry, complex, and expensive. To address this challenge, we propose a hybrid indoor asset tracking system that integrates visible light communication (VLC) and backscatter communication (BC) within a simultaneous lightwave information and power transfer (SLIPT) framework. We design a low-complexity, energy-neutral IoT node, namely a backscatter device (BD), which harvests energy from light-emitting diode (LED) access points and then modulates and reflects ambient RF carriers to indicate its location within particular VLC cells. We present a multi-cell VLC deployment with a frequency-division multiplexing (FDM) scheme that mitigates interference among LED access points by assigning them distinct frequency pairs based on a four-color map scheduling principle. We develop a lightweight particle filter (PF) tracking algorithm at an edge RF reader, where the fusion of proximity reports and the received backscatter signal strength is used to track the BD. Experimental results show that this approach achieves a positioning error of 0.318 m at the 50th percentile and 0.634 m at the 90th percentile, while avoiding complex photodetectors and active RF synthesizing components at the energy-neutral IoT node. By demonstrating robust performance on multiple indoor trajectories, the proposed solution enables scalable, cost-effective, and energy-neutral indoor tracking for pervasive and edge-assisted IoT applications.
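To make the fusion step concrete, the sketch below shows one predict/update/resample cycle of a 2-D particle filter that combines a backscatter received-signal-strength measurement with a binary VLC proximity report. The path-loss model, noise figures, and cell geometry are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rss_model(d, p0=-40.0, n=2.0):
    # Hypothetical log-distance path-loss model for backscatter RSS (dBm).
    return p0 - 10.0 * n * np.log10(np.maximum(d, 0.1))

def pf_step(particles, weights, rss_meas, reader_xy, cell_center, cell_radius,
            step_std=0.1, rss_std=3.0):
    """One predict/update/resample cycle of a 2-D particle filter."""
    # Predict: random-walk motion model.
    particles = particles + np.random.normal(0.0, step_std, particles.shape)
    # Update from backscatter RSS: Gaussian likelihood around the path-loss model.
    d = np.linalg.norm(particles - reader_xy, axis=1)
    w_rss = np.exp(-0.5 * ((rss_meas - rss_model(d)) / rss_std) ** 2)
    # Update from the VLC proximity report: particles outside the reported cell are down-weighted.
    in_cell = np.linalg.norm(particles - cell_center, axis=1) <= cell_radius
    w_prox = np.where(in_cell, 1.0, 0.05)
    weights = weights * w_rss * w_prox
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Usage: the position estimate is the weighted mean of the particle cloud.
particles = np.random.uniform(0.0, 5.0, (500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights = pf_step(particles, weights, rss_meas=-58.0,
                             reader_xy=np.array([0.0, 0.0]),
                             cell_center=np.array([2.0, 3.0]), cell_radius=1.0)
print((weights[:, None] * particles).sum(axis=0))
```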
Neuromorphic computing demands synaptic elements that can store and update weights with high precision while being read non-destructively. Conventional ferroelectric synapses store weights in remanent polarization states and often require destructive electrical readout, limiting endurance and reliability. We demonstrate a ferroelectric MEMS (FeMEMS) based synapse in which analog weights are stored in the piezoelectric coefficient $d_{31,eff}$ of a released Hf$_{0.5}$Zr$_{0.5}$O$_2$ (HZO) MEMS unimorph. Partial switching of ferroelectric domains modulates $d_{31,eff}$, and a low-amplitude mechanical drive reads out the weight without read disturb, yielding more than 7 bits of programming levels. The mechanical switching distribution follows a Lorentzian distribution as a logarithmic function of the partial poling voltage ($V_p$), consistent with nucleation-limited switching (NLS), and the median threshold extracted from electromechanical data obeys a Merz-type field-time law with a dimensionless exponent $\alpha = 3.62$. These relationships establish a quantitative link between mechanical weights and electrical switching kinetics. This mechanically read synapse avoids depolarization and charge-injection effects, provides bipolar weights (well suited for excitatory and inhibitory synapses), directly reveals partial domain populations, and offers a robust, energy-efficient route toward high-bit neuromorphic hardware.
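For reference, generic forms of the two relations named in the abstract are sketched below; the abstract only reports the exponent $\alpha = 3.62$, so the parameterization ($w$, $V_0$, $t_0$, $E_a$) should be read as assumed, illustrative NLS conventions rather than the authors' fitted model.

```latex
% Assumed generic NLS forms; w, V_0, t_0, and E_a are illustrative fit parameters.
\[
  F(\log V_p) \;=\; \frac{1}{\pi}\,\frac{w}{\left(\log V_p - \log V_0\right)^{2} + w^{2}},
  \qquad
  t_{\mathrm{sw}} \;=\; t_{0}\,\exp\!\left[\left(\frac{E_{a}}{E}\right)^{\alpha}\right],
  \quad \alpha = 3.62 .
\]
```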
This paper presents an electromagnetic investigation of the crosstalk between two bent microstrip lines (MLs) separated by a perforated planar shield. As an extension of our previous study, the effects of various discontinuities in either the MLs or the shield along the coupling path are analyzed through numerical simulations and validated by measurements. The underlying electromagnetic mechanisms are also discussed. Furthermore, multimodal wave theory in a rectangular waveguide is applied to predict crosstalk behavior when the shield contains an aperture. This study aims to conceptually elucidate complex crosstalk phenomena that are difficult to model using circuit theory, and successful predictions of crosstalk behavior are presented for different problem cases.
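The multimodal waveguide argument above rests on the standard cutoff-frequency relation for a rectangular guide, $f_{c,mn} = \tfrac{c}{2}\sqrt{(m/a)^2 + (n/b)^2}$; below is a short sketch of how mode cutoffs would be tabulated for an assumed shield-cavity cross-section (the dimensions are illustrative, not those of the measured structure).

```python
import math

def cutoff_frequency(m, n, a, b, c0=2.998e8):
    """Cutoff frequency (Hz) of the TE_mn / TM_mn mode of an air-filled
    rectangular waveguide with cross-section a x b (metres)."""
    return (c0 / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# Illustrative cross-section only (not the dimensions used in the paper).
a, b = 20e-3, 10e-3
for m, n in [(1, 0), (2, 0), (0, 1), (1, 1)]:
    print(f"TE{m}{n} cutoff: {cutoff_frequency(m, n, a, b) / 1e9:.2f} GHz")
```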
We study the problem of adding native pulse-level control to heterogeneous High Performance Computing-Quantum Computing (HPCQC) software stacks, using the Munich Quantum Software Stack (MQSS) as a case study. The goal is to expand the capabilities of HPCQC environments by offering low-level access and control, which is currently not typically foreseen for such hybrid systems. For this, we need to establish new interfaces that integrate such pulse-level control into the lower layers of the software stack, including a proper representation. Pulse-level quantum programs can be fully described with only three low-level abstractions: ports (input/output channels), frames (reference signals), and waveforms (pulse envelopes). We identify four key challenges in representing these pulse abstractions: at the user-interface level, at the compiler level (including the Intermediate Representation (IR)), at the backend-interface level, and in the appropriate exchange format. For each challenge, we propose concrete solutions in the context of MQSS. These include introducing a compiled (C/C++) pulse Application Programming Interface (API) to overcome Python runtime overhead, extending its LLVM support to include pulse-related instructions, using its C-based backend interface to query relevant hardware constraints, and designing a portable exchange format for pulse sequences. Our integrated approach provides an end-to-end path for pulse-aware compilation and runtime execution in HPCQC environments. This work lays out the architectural blueprint for extending HPCQC integration to support pulse-level quantum operations without disrupting state-of-the-art classical workflows.
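Since the abstract names only three abstractions (ports, frames, waveforms) plus the instructions built on them, a minimal sketch of how they might be modeled is given below, written in Python purely for brevity even though the paper argues for a compiled C/C++ API; all class and field names are hypothetical, not the MQSS interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Port:
    """Input/output channel connecting the control electronics to a qubit line."""
    name: str          # e.g. "drive0" (illustrative identifier)
    direction: str     # "tx" or "rx"

@dataclass
class Frame:
    """Reference signal tracking a carrier frequency and accumulated phase on a port."""
    port: Port
    frequency_hz: float
    phase_rad: float = 0.0

@dataclass
class Waveform:
    """Sampled pulse envelope to be played on a frame."""
    samples: List[complex]
    dt_s: float        # sample spacing in seconds

@dataclass
class PlayInstruction:
    """One pulse-level instruction: play a waveform on a frame."""
    frame: Frame
    waveform: Waveform

# A toy one-instruction pulse program built from the three abstractions.
drive = Port("drive0", "tx")
f0 = Frame(drive, frequency_hz=5.1e9)
gauss = Waveform(samples=[complex(x) for x in (0.1, 0.5, 1.0, 0.5, 0.1)], dt_s=1e-9)
program = [PlayInstruction(f0, gauss)]
print(program)
```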
Generative AI (GenAI) is rapidly transforming software engineering (SE) practices, influencing how SE processes are executed, as well as how software systems are developed, operated, and evolved. This paper applies design science research to build a roadmap for GenAI-augmented SE. The process consists of three cycles that incrementally integrate multiple sources of evidence, including collaborative discussions from the FSE 2025 "Software Engineering 2030" workshop, rapid literature reviews, and external feedback sessions involving peers. McLuhan's tetrads were used as a conceptual instrument to systematically capture the transformative effects of GenAI on SE processes and software products. The resulting roadmap identifies four fundamental forms of GenAI augmentation in SE and systematically characterizes their related research challenges and opportunities. These insights are then consolidated into a set of future research directions. By grounding the roadmap in a rigorous multi-cycle process and cross-validating it among independent author teams and peers, the study provides a transparent and reproducible foundation for analyzing how GenAI affects SE processes, methods, and tools, and for framing future research within this rapidly evolving area. Based on these findings, the article finally makes ten predictions for SE in the year 2030.
Generating structurally valid and behaviorally diverse synthetic event logs for interaction-aware models is a challenging yet crucial problem, particularly in settings with limited or privacy-constrained user data. Existing methods such as heuristic simulations and LLM-based generators often lack structural coherence or controllability, producing synthetic data that fails to accurately represent real-world system interactions. This paper presents a framework that integrates Finite State Machines (FSMs) with Generative Flow Networks (GFlowNets) to generate structured, semantically valid, and diverse synthetic event logs. Our FSM-constrained GFlowNet ensures syntactic validity and behavioral variation through dynamic action masking and guided sampling. The FSM, derived from expert traces, encodes domain-specific rules, while the GFlowNet is trained using a flow-matching objective with a hybrid reward balancing FSM compliance and statistical fidelity. We instantiate the framework in the context of UI interaction logs using the UIC HCI dataset, but the approach generalizes to any symbolic sequence domain. Experimental results based on distributional metrics show that our FSM-GFlowNet produces realistic, structurally consistent logs, achieving, for instance, a KL divergence of 0.2769 and a Chi-squared distance of 0.3522 against the real user-log baseline, significantly outperforming GPT-4o's 2.5294/13.8020 and Gemini's 3.7233/63.0355, alongside a leading bigram overlap of 0.1214 vs. GPT-4o's 0.0028 and Gemini's 0.0007. A downstream use case, intent classification, demonstrates that classifiers trained solely on our synthetic logs produced by the FSM-GFlowNet achieve competitive accuracy compared to real data.
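A minimal sketch of the FSM-constrained sampling idea follows: the FSM transition table supplies the dynamic action mask, and a stand-in policy (here uniform, in place of the trained GFlowNet forward policy) chooses only among allowed actions. The states and UI events are invented for illustration and are not taken from the UIC HCI dataset.

```python
import random

# Toy FSM over UI events (states and transitions are illustrative).
FSM = {
    "start":   {"open_app": "home"},
    "home":    {"click_menu": "menu", "search": "results"},
    "menu":    {"select_item": "detail", "back": "home"},
    "results": {"select_item": "detail", "back": "home"},
    "detail":  {"add_to_cart": "cart", "back": "home"},
    "cart":    {"checkout": "end", "back": "home"},
}

def sample_trace(policy, max_len=12):
    """Sample one event log; the FSM mask guarantees syntactic validity,
    while `policy` (a stand-in for the trained GFlowNet forward policy)
    chooses among the allowed actions only."""
    state, trace = "start", []
    for _ in range(max_len):
        allowed = list(FSM.get(state, {}))
        if not allowed:                      # terminal state reached
            break
        scores = policy(state, allowed)      # unnormalized scores over allowed actions
        action = random.choices(allowed, weights=scores)[0]
        trace.append(action)
        state = FSM[state][action]
    return trace

uniform_policy = lambda state, allowed: [1.0] * len(allowed)
print(sample_trace(uniform_policy))
```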
This paper introduces the Agentic AI Governance Assurance & Trust Engine (AAGATE), a Kubernetes-native control plane designed to address the unique security and governance challenges posed by autonomous, language-model-driven agents in production. Recognizing the limitations of traditional Application Security (AppSec) tooling for improvisational, machine-speed systems, AAGATE operationalizes the NIST AI Risk Management Framework (AI RMF). It integrates specialized security frameworks for each RMF function: the Agentic AI Threat Modeling MAESTRO framework for Map, a hybrid of OWASP's AIVSS and SEI's SSVC for Measure, and the Cloud Security Alliance's Agentic AI Red Teaming Guide for Manage. By incorporating a zero-trust service mesh, an explainable policy engine, behavioral analytics, and decentralized accountability hooks, AAGATE provides a continuous, verifiable governance solution for agentic AI, enabling safe, accountable, and scalable deployment. The framework is further extended with DIRF for digital identity rights, LPCI defenses for logic-layer injection, and QSAF monitors for cognitive degradation, ensuring governance spans systemic, adversarial, and ethical risks.
Molecular communication (MC) enables information exchange through the transmission of signaling molecules (SMs) and holds promise for many innovative applications. However, most existing MC studies rely on simplified transmitter (TX) models that do not account for the physical and biochemical limitations of realistic biological hardware. This work extends previous efforts toward developing models for practical MC systems by proposing a more realistic TX model that incorporates the delay in SM release and TX noise introduced by biological components. Building on this more realistic, functionalized vesicle-based TX model, we propose two novel modulation schemes specifically designed for this TX to mitigate TX-induced memory effects that arise from delayed and imperfectly controllable SM release. The proposed modulation schemes enable low-complexity receiver designs by mitigating memory effects directly at the TX. Numerical evaluations demonstrate that the proposed schemes improve communication reliability under realistic biochemical constraints, offering an important step toward physically realizable MC systems.
Ferroelectric-based capacitive crossbar arrays have been proposed for energy-efficient in-memory computing in the charge domain. They combat challenges such as sneak paths and high static power faced by resistive crossbar arrays, but are susceptible to thermal noise, which limits the effective number of bits (ENOB) of the weighted sum. A direct way to reduce this thermal noise is to lower the temperature, since thermal noise is proportional to temperature. In this work, we first characterize the non-volatile capacitors (nvCaps) on a foundry 28 nm platform at cryogenic temperatures to evaluate the memory window and ON-state retention as a function of temperature down to 77 K, and then use the calibrated device models to simulate the capacitive crossbar arrays in SPICE at lower temperatures to demonstrate higher ENOB (~5 bits) for 128x128 multiply-and-accumulate (MAC) operations.
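As a back-of-the-envelope illustration of why cooling helps, the snippet below estimates ENOB from kT/C sampling noise at a single charge-summing node; the capacitance and voltage swing are placeholder values, and the paper's ~5-bit figure comes from full-array SPICE simulation with calibrated device models rather than this first-order formula.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def enob_ktc(c_total_f, v_fullscale, temperature_k):
    """First-order ENOB estimate for a charge-domain MAC output node,
    assuming kT/C sampling noise is the dominant error source."""
    v_noise_rms = math.sqrt(K_B * temperature_k / c_total_f)
    # Full-scale sine-wave convention: ENOB = (SNR_dB - 1.76) / 6.02.
    snr_db = 20.0 * math.log10((v_fullscale / (2.0 * math.sqrt(2.0))) / v_noise_rms)
    return (snr_db - 1.76) / 6.02

# Illustrative node capacitance and swing (not the foundry values from the paper).
for t in (300.0, 77.0):
    print(f"T = {t:5.1f} K  ->  ENOB ~ {enob_ktc(100e-15, 0.5, t):.2f} bits")
```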
In this paper, we demonstrate how the physics of entropy production, when combined with symmetry constraints, can be used for implementing high-performance and energy-efficient analog computing systems. At the core of the proposed framework is a generalized maximum-entropy principle that can describe the evolution of a mesoscopic physical system formed by an interconnected ensemble of analog elements, including devices that can be readily fabricated on standard integrated circuit technology. We show that the maximum-entropy state of this ensemble corresponds to a margin-propagation (MP) distribution and can be used for computing correlations and inner products as the ensemble's macroscopic properties. Furthermore, the limits of computational throughput and energy efficiency can be pushed by extending the framework to non-equilibrium or transient operating conditions, which we demonstrate using a proof-of-concept radio-frequency (RF) correlator integrated circuit fabricated in a 22 nm SOI CMOS process. The measured results show a compute efficiency greater than 2 Peta ($10^{15}$) Bit Operations per second per Watt (PetaOPS/W) at 8-bit precision and greater than 0.8 Exa ($10^{18}$) Bit Operations per second per Watt (ExaOPS/W) at 3-bit precision for RF data sampled at rates greater than 4 GS/s. Using the fabricated prototypes, we also showcase several real-world RF applications at the edge, including spectrum sensing and code-domain communications.
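The margin-propagation computation referred to above is, in its common software form, a reverse-water-filling solve: find the threshold $z$ such that $\sum_i \max(L_i - z, 0) = \gamma$, which serves as a piecewise-linear surrogate for log-sum-exp (and hence for inner products in the log domain). The sketch below is an assumed, generic version of that solve, not the circuit-level implementation described in the paper.

```python
import numpy as np

def margin_propagation(scores, gamma, iters=60):
    """Solve sum_i max(scores_i - z, 0) = gamma for z by bisection.
    z is a piecewise-linear surrogate for log-sum-exp of the scores."""
    lo, hi = np.min(scores) - gamma, np.max(scores)
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if np.sum(np.maximum(scores - z, 0.0)) > gamma:
            lo = z          # constraint still exceeded -> threshold must rise
        else:
            hi = z
    return 0.5 * (lo + hi)

# Log-domain scores of four product terms x_i * w_i (illustrative values).
x = np.log(np.array([0.2, 0.5, 1.0, 0.1]))
w = np.log(np.array([0.3, 0.9, 0.4, 0.7]))
scores = x + w
z = margin_propagation(scores, gamma=0.1)
# z tracks log-sum-exp up to a gamma-dependent offset; smaller gamma -> closer to max().
print(z, np.logaddexp.reduce(scores))
```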
The deployment of AI on edge computing devices faces significant challenges related to energy consumption and functionality. These devices could greatly benefit from brain-inspired learning mechanisms, allowing for real-time adaptation while using low power. In-memory computing with nanoscale resistive memories may play a crucial role in enabling the execution of AI workloads on these edge devices. In this study, we introduce voltage-dependent synaptic plasticity (VDSP) as an efficient approach for unsupervised and local learning in memristive synapses based on Hebbian principles. This method enables online learning without requiring the complex pulse-shaping circuits typically necessary for spike-timing-dependent plasticity (STDP). We show how VDSP can be advantageously adapted to three types of memristive devices (TiO$_2$, HfO$_2$-based metal-oxide filamentary synapses, and HfZrO$_4$-based ferroelectric tunnel junctions (FTJs)) with distinctive switching characteristics. System-level simulations of spiking neural networks incorporating these devices were conducted to validate unsupervised learning on MNIST-based pattern recognition tasks, achieving state-of-the-art performance. The results demonstrated over 83% accuracy across all devices using 200 neurons. Additionally, we assessed the impact of device variability, such as switching thresholds and ratios between high and low resistance states, and proposed mitigation strategies to enhance robustness.
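A minimal sketch of the general VDSP idea, as described above: when the postsynaptic neuron fires, the sign and magnitude of the weight update are read from the presynaptic membrane voltage rather than from a spike-timing difference, so no pulse-shaping circuitry is needed. The thresholds, learning rate, and voltage values are illustrative assumptions, not the device-calibrated rules used in the paper.

```python
import numpy as np

def vdsp_update(w, v_pre, v_rest, v_thresh, lr=0.01, w_min=0.0, w_max=1.0):
    """Weight update applied at the moment the postsynaptic neuron fires.
    The presynaptic membrane voltage v_pre (not a spike-timing difference)
    decides between potentiation and depression."""
    midpoint = 0.5 * (v_rest + v_thresh)
    if v_pre > midpoint:
        # Pre neuron was recently depolarized -> potentiate (Hebbian).
        dw = lr * (v_pre - midpoint) / (v_thresh - v_rest) * (w_max - w)
    else:
        # Pre neuron was near rest -> depress.
        dw = -lr * (midpoint - v_pre) / (v_thresh - v_rest) * (w - w_min)
    return float(np.clip(w + dw, w_min, w_max))

# Illustrative LIF-style voltages (volts); thresholds are placeholders.
print(vdsp_update(w=0.5, v_pre=-0.052, v_rest=-0.070, v_thresh=-0.050))  # potentiation
print(vdsp_update(w=0.5, v_pre=-0.068, v_rest=-0.070, v_thresh=-0.050))  # depression
```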
Soft robots are increasingly favoured in specific applications such as healthcare, due to their adaptability, which stems from the non-linear properties of their building materials. However, these properties also pose significant challenges in designing the morphologies and controllers of soft robots. The relatively short history of this field has not yet produced sufficient knowledge to consistently derive optimal solutions. Consequently, an automated process for the design of soft robot morphologies can be extremely helpful. This study focusses on the cooperative NeuroCoEvolution of networks that are indirect representations of soft robot actuators. Both the morphologies and controllers, represented by Compositional Pattern Producing Networks, are evolved using the well-established NeuroEvolution of Augmenting Topologies method (CPPN-NEAT). The CoEvolution of controllers and morphologies is implemented using the top n individuals from the cooperating population, with various averaging methods tested to determine the fitness of the evaluated individuals. The test-case application for this research is the optimisation of a soft actuator for a drug delivery system. The primary metric used is the maximum displacement of one end of the actuator in a specified direction. Additionally, the robustness of the evolved morphologies is assessed against a range of randomly generated controllers to simulate potential noise in real-world applications. The results of this investigation indicate that CPPN-NEAT produces superior morphologies compared to previously published results from multi-objective optimisation, with reduced computational effort and time. Moreover, the best configuration is found to be CoEvolution with the two best individuals from the cooperative population and the averaging of their fitness using the weighted mean method.
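The cooperative fitness assignment described above can be sketched as follows: each candidate from one population (say, a morphology CPPN) is simulated with the top-n partners from the other population (controller CPPNs), and the resulting displacements are collapsed into one fitness value, here with an assumed weighted mean that favours the best partner. The weights and the toy stand-in for the soft-actuator simulation are illustrative only.

```python
import numpy as np

def weighted_mean_fitness(scores, weights=(0.75, 0.25)):
    """Collapse the scores obtained against the top-n cooperating partners
    (here n = 2, ordered best-first) into a single fitness value."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(np.asarray(scores, dtype=float), w / w.sum()))

def evaluate_population(candidates, partners_top_n, simulate):
    """Cooperative-coevolution evaluation: each candidate (e.g. a morphology)
    is simulated with each of the top-n partners from the other population
    (e.g. controllers), and the resulting displacements are averaged."""
    fitnesses = []
    for cand in candidates:
        scores = [simulate(cand, partner) for partner in partners_top_n]
        fitnesses.append(weighted_mean_fitness(scores))
    return fitnesses

# Toy stand-in for the soft-actuator simulation: displacement depends on both genomes.
simulate = lambda morphology, controller: morphology * controller
print(evaluate_population([0.8, 1.2], partners_top_n=[1.0, 0.6], simulate=simulate))
```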
The 2019-2020 Black Summer bushfires in Australia devastated 19 million hectares, destroyed 3,000 homes, and lasted seven months, demonstrating the escalating scale and urgency of wildfire threats requiring better forecasting for effective response. Traditional fire modeling relies on manual interpretation by Fire Behaviour Analysts (FBAns) and static environmental data, often leading to inaccuracies and operational limitations. Emerging data sources, such as NASA's FIRMS satellite imagery and Volunteered Geographic Information, offer potential improvements by enabling dynamic fire spread prediction. This study proposes a Multimodal Fire Spread Prediction Framework (MFiSP) that integrates social media data and remote sensing observations to enhance forecast accuracy. By adapting fuel map manipulation strategies between assimilation cycles, the framework dynamically adjusts fire behavior predictions to align with the observed rate of spread. We evaluate the efficacy of MFiSP using synthetically generated fire event polygons across multiple scenarios, analyzing individual and combined impacts on forecast perimeters. Results suggest that our MFiSP integrating multimodal data can improve fire spread prediction beyond conventional methods reliant on FBAn expertise and static inputs.
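As a rough illustration of the fuel-map manipulation between assimilation cycles, the snippet below nudges a fuel-load multiplier grid so that the simulated rate of spread moves toward the observed one; the gain, clipping bounds, and units are assumptions for illustration, not MFiSP's actual update rule.

```python
import numpy as np

def adjust_fuel_map(fuel_map, ros_predicted, ros_observed,
                    gain=0.5, lo=0.2, hi=5.0):
    """Between assimilation cycles, nudge the fuel-load multiplier so the
    simulator's rate of spread (ROS) moves toward the observed ROS derived
    from satellite hotspots or crowdsourced fire perimeters."""
    ratio = ros_observed / max(ros_predicted, 1e-6)
    # Relaxed (partial) correction to avoid over-reacting to noisy observations.
    scale = 1.0 + gain * (ratio - 1.0)
    return np.clip(fuel_map * scale, lo, hi)

fuel = np.ones((4, 4))                  # illustrative fuel multiplier grid
fuel = adjust_fuel_map(fuel, ros_predicted=120.0, ros_observed=180.0)
print(fuel[0, 0])                        # 1.25: fuel raised because spread was underpredicted
```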
The sixth generation (6G) targets ultra-reliable low-latency (URLLC) gigabit connectivity in mmWave bands, where directional channels require precise beam alignment. Reconfigurable intelligent surfaces (RIS) reshape wave propagation and extend coverage, but they enlarge the beam search space at the base station, making exhaustive sweeps inefficient due to control overhead and latency. We propose an ML-based user localization framework for RIS-assisted communication at 27 GHz. A 20x20 RIS reflects signals from a core-network-connected base station and sweeps beams across the 0-90 degree elevation plane, divided into four angular sectors. We build a dataset by recording received signal power (Pr in dBm) across user locations and train multiple regressors, including decision tree (DT), support vector regressor (SVR), k-nearest neighbor (KNN), XGBoost, gradient boosting, and random forest. In operation, an unknown user in the same plane measures four received power values (one per sector) and reports them to the pretrained RIS controller, which predicts the user's angular position in real time. Evaluation using mean absolute error (MAE), root mean squared error (RMSE), and R-squared (R2) shows high accuracy. The DT model achieves an MAE of 4.8 degrees with R2 = 0.96, while the other models reach 70 to 86 percent. Predicted radiation patterns, including main-lobe alignment between 52 and 55 degrees, closely track the ground truth. The framework reduces beam probing, enables faster alignment, and lowers latency for RIS-assisted 6G networks.
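A minimal sketch of the regression setup described above, using a decision tree on four per-sector received-power features; since the measured dataset is not available here, the beam-pattern model used to synthesize training data is an invented placeholder, so the printed errors will not match the paper's figures.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the measurement campaign: four received-power values
# (dBm), one per 22.5-degree RIS sweep sector, as a function of the true angle.
def sector_powers(angle_deg, noise_db=1.0):
    centers = np.array([11.25, 33.75, 56.25, 78.75])   # sector beam centers
    p = -50.0 - 0.02 * (angle_deg - centers) ** 2       # toy beam-pattern model
    return p + rng.normal(0.0, noise_db, size=4)

angles = rng.uniform(0.0, 90.0, 2000)
X = np.array([sector_powers(a) for a in angles])
X_tr, X_te, y_tr, y_te = train_test_split(X, angles, test_size=0.25, random_state=0)

model = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAE = {mean_absolute_error(y_te, pred):.2f} deg, R2 = {r2_score(y_te, pred):.3f}")
```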
Spin glass systems, as lattices of disordered magnets with random interactions, have important implications within the theory of magnetization and applications to a wide range of hard combinatorial optimization problems. Nevertheless, despite sustained efforts, algorithms that attain both high accuracy and efficiency remain elusive. Because their topologies are low-$k$-partite, such systems are well suited to a probabilistic computing (PC) approach using probabilistic bits (P-bits). Here we present complex spin glass topologies solved on a simulated PC realization of an Ising machine. First, we considered a number of three-dimensional Edwards-Anderson cubic spin glasses, both randomly generated and taken from the literature as benchmarks. Second, biclique topologies were identified as a likely candidate for a comparative advantage over other state-of-the-art techniques, and a range of sizes was simulated. We find that the number of iterations necessary to find solutions of a given quality has constant scaling with system size past a saturation point if one assumes perfect parallelization of the hardware. Therefore, a PC architecture can trade the computational depth of other methods for parallelized width by connecting a number of P-bits that scales linearly in system size. This constant scaling is shown to persist across a number of solution qualities, up to a certain limit beyond which resource constraints limited further investigation. The saturation point varies between topologies and qualities and becomes exponentially hard in the limit of finding the ground truth. Furthermore, we demonstrate that our PC architecture can solve spin-glass topologies to the same quality as the most advanced quantum annealer in minutes, making modest assumptions about their implementation on hardware.
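A compact sketch of the simulated P-bit dynamics on an Ising instance: each p-bit stochastically updates from its local field with probability $\tfrac{1}{2}\bigl(1+\tanh(\beta I_i)\bigr)$, and sweeps are repeated under a simple annealing schedule. The coupling matrix, size, and schedule below are illustrative, not the Edwards-Anderson or biclique benchmarks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def pbit_sweep(m, J, h, beta):
    """One asynchronous sweep of probabilistic bits: each p-bit samples its
    state from its local field, as in a simulated Ising machine."""
    for i in rng.permutation(len(m)):
        local_field = J[i] @ m + h[i]
        m[i] = 1 if rng.random() < 0.5 * (1.0 + np.tanh(beta * local_field)) else -1
    return m

# Small random +/-1 spin glass (illustrative; not a benchmark instance).
n = 64
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1); J = J + J.T
h = np.zeros(n)
m = rng.choice([-1, 1], size=n).astype(float)

for sweep in range(500):
    m = pbit_sweep(m, J, h, beta=0.2 + 0.01 * sweep)   # simple annealing schedule
print("energy:", -0.5 * m @ J @ m - h @ m)
```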
Data pipelines are essential in stream processing as they enable the efficient collection, processing, and delivery of real-time data, supporting rapid data analysis. In this paper, we present AutoStreamPipe, a novel framework that employs Large Language Models (LLMs) to automate the design, generation, and deployment of stream processing pipelines. AutoStreamPipe bridges the semantic gap between high-level user intent and platform-specific implementations across distributed stream processing systems through structured multi-agent reasoning, integrating a Hypergraph of Thoughts (HGoT) as an extended version of Graph of Thoughts (GoT). AutoStreamPipe combines resilient execution strategies, advanced query analysis, and HGoT to deliver accurate pipelines. Experimental evaluations on diverse pipelines demonstrate that AutoStreamPipe significantly reduces development time (by a factor of 6.3) and error rates (by a factor of 5.19), as measured by a novel Error-Free Score (EFS), compared to LLM code-generation methods.
Parameter monitoring and control systems are crucial in industry as they enable automation processes that improve productivity and resource optimization. These improvements also help to manage environmental factors and the complex interactions between multiple inputs and outputs required for production management. This paper proposes an automation system for broiler management based on a simulation scenario that involves sensor networks and embedded systems. The aim is to create a transmission network for monitoring and controlling broiler temperature and feeding using the Internet of Things (IoT), complemented by a dashboard and a cloud-based service database to track improvements in broiler management. We hope this work will serve as a guide for stakeholders and entrepreneurs in the animal production industry, fostering sustainable development through simple and cost-effective automation solutions. The goal is for them to scale and integrate these recommendations into their existing operations, leading to more efficient decision-making at the management level.
A heterogeneous memory has a single address space with fast access to some addresses (a fast tier of DRAM) and slow access to other addresses (a capacity tier of CXL-attached memory or NVM). A tiered memory system aims to maximize the number of accesses to the fast tier via page migrations between the fast and capacity tiers. Unfortunately, previous tiered memory systems can perform poorly due to (1) allocating hot and cold objects in the same page and (2) abrupt changes in hotness measurements that lead to thrashing. This paper presents Jenga, a tiered memory system that addresses both problems. Jenga's memory allocator uses a novel context-based page allocation strategy. Jenga's accurate measurements of page hotness enable it to react to changes in memory access behavior in a timely manner while avoiding thrashing. Compared to the best previous tiered memory system, Jenga runs memory-intensive applications 28% faster across 10 applications when the fast-tier capacity matches the working-set size, at a CPU overhead of <3% of a single core and a memory overhead of <0.3%.
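A toy sketch of the two ingredients named above, in the spirit of (not taken from) Jenga: an exponentially weighted hotness estimate smooths abrupt changes in access counts, and a hysteresis band between promotion and demotion thresholds prevents a page from ping-ponging between tiers. The thresholds and smoothing factor are illustrative.

```python
from dataclasses import dataclass

@dataclass
class PageStats:
    hotness: float = 0.0     # smoothed access-frequency estimate
    in_fast_tier: bool = False

def update_hotness(stats, accesses_this_epoch, alpha=0.3):
    """Exponentially weighted hotness so a single noisy epoch cannot flip a page."""
    stats.hotness = (1.0 - alpha) * stats.hotness + alpha * accesses_this_epoch

def migration_decision(stats, promote_at=5.0, demote_at=2.0):
    """Hysteresis band between promotion and demotion thresholds avoids thrashing:
    a page must be clearly hot to be promoted and clearly cold to be demoted."""
    if not stats.in_fast_tier and stats.hotness >= promote_at:
        return "promote"
    if stats.in_fast_tier and stats.hotness <= demote_at:
        return "demote"
    return "stay"

page = PageStats()
for accesses in (12, 10, 1, 0, 0, 0):
    update_hotness(page, accesses)
    action = migration_decision(page)
    if action == "promote":
        page.in_fast_tier = True
    elif action == "demote":
        page.in_fast_tier = False
    print(f"hotness={page.hotness:5.2f}  action={action}")
```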