Browse, search and filter the latest cybersecurity research papers from arXiv
Reconfigurable Intelligent Surfaces (RISs) transform the wireless environment by modifying the amplitude, phase, and polarization of incoming waves, significantly improving coverage performance. Notably, optimizing the deployment of RISs becomes vital, but existing optimization methods face challenges such as high computational complexity, limited adaptability to changing environments, and a tendency to converge on local optima. In this paper, we propose to optimize the deployment of large-scale 3D RISs using a diffusion model based on probabilistic generative learning. We begin by dividing the target area into fixed grids, with each grid corresponding to a potential deployment location. Then, a multi-RIS deployment optimization problem is formulated, which is difficult to solve directly. By treating RIS deployment as a conditional generation task, the well-trained diffusion model can generate the distribution of deployment strategies, and thus, the optimal deployment strategy can be obtained by sampling from this distribution. Simulation results demonstrate that the proposed diffusion-based method outperforms traditional benchmark approaches in terms of exceed ratio and generalization.
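As a rough illustration of the sampling step described in this abstract, the sketch below runs a reverse-diffusion (ancestral sampling) loop over a binary deployment grid; the grid size, noise schedule, and the placeholder denoiser are assumptions for illustration, not the paper's trained model or conditioning.

```python
# Minimal sketch: grid size, noise schedule, and the denoiser are placeholders.
import numpy as np

GRID = 16                      # 16x16 candidate deployment grid (hypothetical)
T = 50                         # number of reverse diffusion steps
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_model(x, t, cond):
    """Placeholder for the trained noise-prediction network,
    conditioned on environment features `cond` (e.g., a coverage map)."""
    return np.zeros_like(x)    # stand-in; a real model returns predicted noise

def sample_deployment(cond, rng=np.random.default_rng(0)):
    x = rng.standard_normal((GRID, GRID))          # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(x, t, cond)
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise       # ancestral sampling step
    return x > 0.5                                  # binarize: deploy an RIS or not

layout = sample_deployment(cond=None)
print("number of RIS placements:", int(layout.sum()))
```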
Digital twins have been introduced to support city operations, yet existing scene-descriptor formats and digital twin platforms often lack the integration, federation, and adaptable connectivity that urban environments demand. Modern digital twin platforms decouple data streams and representations into separate architectural planes, fusing them only at the visualization layer and limiting potential for simulation or further processing of the combined assets. At the same time, geometry-centric file standards for digital twin description, and services built on top of them, focus primarily on explicitly declaring geometry and additional structural or photorealistic parameters, making integration with evolving context information a complicated process while limiting compatibility with newer representation methods. Additionally, multi-provider federation, critical in smart city services where multiple stakeholders may control distinct infrastructure or representation assets, is sparsely supported. Consequently, most pilots isolate context and representation, fusing them per use case with ad hoc components and custom description files or glue code, which hinders interoperability. To address these gaps, this paper proposes a novel concept, the 'Digital Twin Descriptor Service (DTDS)', which fuses abstracted references to geometry assets and context information within a single, extensible descriptor service through NGSI-LD. The proposed DTDS provides dynamic and federated integration of context data, representations, and runtime synchronization across heterogeneous engines and simulators. This concept paper outlines the DTDS architectural components and description ontology that enable digital-twin processes in the modern smart city.
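To make the descriptor idea more concrete, here is a minimal, hypothetical NGSI-LD-style entity that pairs a reference to a geometry asset with a relationship to live context data; the entity type, attribute names, and URIs are illustrative and not taken from the DTDS ontology.

```python
# Illustrative sketch only: the entity type, attribute names, and asset URI are
# hypothetical placeholders, not the actual DTDS ontology or a real deployment.
import json

descriptor = {
    "id": "urn:ngsi-ld:DigitalTwinDescriptor:building-042",
    "type": "DigitalTwinDescriptor",
    "representationAsset": {                  # abstracted reference to a geometry asset
        "type": "Property",
        "value": {"format": "glTF", "uri": "https://assets.example.org/building-042.glb"},
    },
    "contextSource": {                        # link to live context information
        "type": "Relationship",
        "object": "urn:ngsi-ld:Building:042",
    },
    "provider": {"type": "Property", "value": "city-operator-A"},  # federation hint
    "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"],
}

# A client would POST this descriptor to an NGSI-LD context broker's /entities endpoint.
print(json.dumps(descriptor, indent=2))
```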
Foundational models of computation often abstract away physical hardware limitations. However, in extreme environments like In-Network Computing (INC), these limitations become inviolable laws, creating an acute trilemma among communication efficiency, bounded memory, and robust scalability. Prevailing distributed paradigms, while powerful in their intended domains, were not designed for this stringent regime and thus face fundamental challenges. This paper demonstrates that resolving this trilemma requires a shift in perspective - from seeking engineering trade-offs to deriving solutions from logical necessity. We establish a rigorous axiomatic system that formalizes these physical constraints and prove that for the broad class of computations admitting an idempotent merge operator, there exists a unique, optimal paradigm. Any system satisfying these axioms must converge to a single normal form: Self-Describing Parallel Flows (SDPF), a purely data-centric model where stateless executors process flows that carry their own control logic. We further prove this unique paradigm is convergent, Turing-complete, and minimal. In the same way that the CAP theorem established a boundary for what is impossible in distributed state management, our work provides a constructive dual: a uniqueness theorem that reveals what is inevitable for distributed computation flows under physical law.
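A toy sketch of the idempotent-merge property that the axiomatic argument relies on (the flow fields and the max-based aggregate are illustrative, not the paper's formal construction): merging the same flow twice leaves the result unchanged, so duplicate deliveries are harmless for a stateless executor.

```python
# Toy illustration of an idempotent, commutative merge over self-describing flows.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    seen_ids: frozenset          # identifiers already folded into this flow
    value: int                   # running aggregate (here: a max)

def merge(a: Flow, b: Flow) -> Flow:
    """Idempotent merge: folding in the same flow twice is a no-op."""
    return Flow(a.seen_ids | b.seen_ids, max(a.value, b.value))

f1 = Flow(frozenset({"s1"}), 7)
f2 = Flow(frozenset({"s2"}), 3)

once = merge(f1, f2)
twice = merge(once, f2)          # duplicate delivery of f2
assert once == twice             # idempotence makes retransmissions harmless
print(once)
```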
Next location prediction is a key task in human mobility analysis, crucial for applications like smart city resource allocation and personalized navigation services. However, existing methods face two significant challenges: first, they fail to address the dynamic imbalance between periodic and chaotic mobile patterns, leading to inadequate adaptation over sparse trajectories; second, they underutilize contextual cues, such as temporal regularities in arrival times, which persist even in chaotic patterns and offer stronger predictability than spatial forecasts due to reduced search spaces. To tackle these challenges, we propose CANOE, a ChAotic Neural Oscillator nEtwork for next location prediction. CANOE introduces a biologically inspired Chaotic Neural Oscillatory Attention mechanism that injects adaptive variability into traditional attention, enabling balanced representation of evolving mobility behaviors, and employs a Tri-Pair Interaction Encoder along with a Cross Context Attentive Decoder to fuse multimodal "who-when-where" contexts in a joint framework for enhanced prediction performance. Extensive experiments on two real-world datasets demonstrate that CANOE consistently and significantly outperforms a sizeable collection of state-of-the-art baselines, yielding 3.17%-13.11% improvement over the best-performing baselines across different cases. In particular, CANOE makes robust predictions over mobility trajectories with different levels of chaos. A series of ablation studies also supports our key design choices. Our code is available at: https://github.com/yuqian2003/CANOE.
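The abstract does not specify the oscillator dynamics, so the following sketch only illustrates the general idea of injecting chaotic variability into attention logits, using a logistic map as a stand-in oscillator and an arbitrary mixing weight; it is not the CANOE mechanism itself.

```python
# Stand-in for chaotic-oscillator attention: perturb the attention logits with a
# logistic-map signal before the softmax. All parameters are illustrative.
import numpy as np

def logistic_map(x, r=3.9, steps=10):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

def chaotic_attention(q, k, v, alpha=0.1, rng=np.random.default_rng(0)):
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    chaos = logistic_map(rng.uniform(0.1, 0.9, size=logits.shape))  # values in (0, 1)
    logits = logits + alpha * (chaos - 0.5)       # inject adaptive variability
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

q = np.random.randn(4, 8); k = np.random.randn(6, 8); v = np.random.randn(6, 8)
print(chaotic_attention(q, k, v).shape)   # (4, 8)
```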
With the emergence of diverse and massive data in the upcoming sixth-generation (6G) networks, task-agnostic semantic communication systems are expected to provide robust intelligent services. In this paper, we propose a task-agnostic learnable weighted-knowledge base semantic communication (TALSC) framework for robust image transmission, addressing real-world heterogeneous data bias in the knowledge base (KB), including label-flipping noise and class imbalance. The TALSC framework incorporates a sample confidence module (SCM) as meta-learner and the semantic coding networks as learners. The learners are updated based on the empirical knowledge provided by the learnable weighted KB (LW-KB). Meanwhile, the meta-learner evaluates the significance of samples according to the task loss feedback and adjusts the update strategy of the learners to enhance the robustness of semantic recovery for unknown tasks. To strike a balance between SCM parameters and the precision of significance evaluation, we design an SCM-grid extension (SCM-GE) approach by embedding Kolmogorov-Arnold networks (KAN) within the SCM, which leverages the concept of spline refinement in KAN and enables a scalable SCM with customizable granularity without retraining. Simulations demonstrate that the TALSC framework effectively mitigates the effects of flipping noise and class imbalance in task-agnostic image semantic communication, achieving at least 12% higher semantic recovery accuracy (SRA) and multi-scale structural similarity (MS-SSIM) compared to state-of-the-art methods.
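A heavily simplified stand-in for the sample-significance idea (the real SCM is a learned meta-learner, and the KAN-based refinement is not modeled here): samples with unusually large task loss, where label-flipping noise tends to concentrate, receive lower weight in the learner update.

```python
# Toy confidence weighting from task-loss feedback; not the paper's learned SCM.
import numpy as np

def sample_confidence(losses, temperature=1.0):
    """Down-weight samples whose task loss is far above the batch average."""
    z = -(losses - losses.mean()) / (temperature * (losses.std() + 1e-8))
    return 1.0 / (1.0 + np.exp(-z))               # confidence in (0, 1)

losses = np.array([0.2, 0.3, 2.5, 0.25])           # the third sample looks mislabeled
w = sample_confidence(losses)
weighted_loss = (w * losses).sum() / w.sum()       # weighted update signal for the learner
print(np.round(w, 3), round(float(weighted_loss), 3))
```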
The rollout of 6G networks introduces unprecedented demands for autonomy, reliability, and scalability. However, the transmission of sensitive telemetry data to central servers raises concerns about privacy and bandwidth. To address this, we propose a federated edge learning framework for predictive maintenance in 6G small cell networks. The system adopts a Knowledge Defined Networking (KDN) architecture with Data, Knowledge, and Control Planes to support decentralized intelligence, telemetry-driven training, and coordinated policy enforcement. In the proposed model, each base station independently trains a failure prediction model using local telemetry metrics, including SINR, jitter, delay, and transport block size, without sharing raw data. A threshold-based multi-label encoding scheme enables the detection of concurrent fault conditions. We then conduct a comparative analysis of centralized and federated training strategies to evaluate their performance in this context. A realistic simulation environment is implemented using the ns-3 mmWave module, incorporating hybrid user placement and base station fault injection across various deployment scenarios. The learning pipeline is orchestrated via the Flower framework, and model aggregation is performed using the Federated Averaging (FedAvg) algorithm. Experimental results demonstrate that the federated model achieves performance comparable to centralized training in terms of accuracy and per-label precision, while preserving privacy and reducing communication overhead.
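The threshold-based multi-label encoding can be sketched directly from the abstract; the specific metrics and threshold values below are placeholders, not the paper's calibrated settings.

```python
# Threshold-based multi-label fault encoding; thresholds are hypothetical.
import numpy as np

THRESHOLDS = {"sinr_db": 5.0, "jitter_ms": 4.0, "delay_ms": 20.0}

def encode_faults(sample):
    """Each metric that crosses its threshold raises an independent fault label,
    so concurrent fault conditions can coexist in the same label vector."""
    return np.array([
        1 if sample["sinr_db"] < THRESHOLDS["sinr_db"] else 0,    # poor radio quality
        1 if sample["jitter_ms"] > THRESHOLDS["jitter_ms"] else 0,
        1 if sample["delay_ms"] > THRESHOLDS["delay_ms"] else 0,
    ])

print(encode_faults({"sinr_db": 2.1, "jitter_ms": 6.3, "delay_ms": 12.0}))  # [1 1 0]
```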
6th Generation (6G) mobile networks are envisioned to support several new capabilities and data-centric applications for an unprecedented number of users, potentially raising significant energy efficiency and sustainability concerns. This brings sustainability into focus as one of the key objectives in their design. To move towards sustainable solutions, the research and standardization communities are focusing on several key issues such as energy information monitoring and exposure, use of renewable energy, and use of Artificial Intelligence/Machine Learning (AI/ML) for improving energy efficiency in 6G networks. The goal is to build energy-aware solutions that take energy information into account, resulting in energy-efficient networks. Designing energy-aware 6G networks brings new challenges, such as increased overheads in gathering and exposing energy-related information and the associated user consent management. The aim of this paper is to provide a comprehensive survey of methods used for the design of energy-efficient 6G networks, including energy harvesting, energy models and parameters, classification of energy-aware services, and AI/ML-based solutions. The survey also includes a few use cases that demonstrate the benefits of incorporating energy awareness into network decisions. Several ongoing standardization efforts in 3GPP, ITU, and IEEE are included to provide insights into the ongoing work and highlight opportunities for new contributions. We conclude this survey with open research problems and challenges that can be explored to make energy-aware design feasible and ensure optimality with respect to performance and energy goals for 6G networks.
Delay Tolerant Networks (DTNs) are critical for emergency communication in highly dynamic and challenging scenarios characterized by intermittent connectivity, frequent disruptions, and unpredictable node mobility. While protocols such as Spray and Wait are widely adopted for their simplicity and low overhead, their static replication strategy lacks the ability to adaptively distinguish high-quality relay nodes, often leading to inefficient and suboptimal message dissemination. To address this challenge, we propose a novel intelligent routing enhancement that integrates machine learning-based node evaluation into the Spray and Wait framework. Several dynamic core features are extracted from simulation logs and used to train multiple classifiers - Multi-Layer Perceptron (MLP), Support Vector Machine (SVM), and Random Forest (RF) - to predict whether a node is suitable as a relay under dynamic conditions. The trained models are deployed via a lightweight Flask-based RESTful API, enabling real-time, adaptive predictions. We implement the enhanced router, MLPBasedSprayRouter, which selectively forwards messages based on the predicted relay quality. A caching mechanism is incorporated to reduce computational overhead and ensure stable, low-latency inference. Extensive experiments under realistic emergency mobility scenarios demonstrate that the proposed framework significantly improves delivery ratio while reducing average latency compared to the baseline protocols. Among all evaluated classifiers, MLP achieved the most robust performance, consistently outperforming both SVM and RF in terms of accuracy, adaptability, and inference speed. These results confirm the novelty and practicality of integrating machine learning into DTN routing, paving the way for resilient and intelligent communication systems in smart cities, disaster recovery, and other dynamic environments.
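A minimal sketch of what a Flask-based relay-prediction endpoint with caching could look like; the feature names, demo training data, and route are assumptions, not the authors' MLPBasedSprayRouter implementation.

```python
# Minimal Flask-style prediction service with a cache; everything here is
# illustrative (demo training data stands in for features from simulation logs).
from functools import lru_cache
import numpy as np
from flask import Flask, request, jsonify
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_demo = rng.random((200, 3))
y_demo = (X_demo.sum(axis=1) > 1.5).astype(int)           # stand-in labels
model = MLPClassifier(max_iter=500).fit(X_demo, y_demo)   # placeholder for the trained MLP

app = Flask(__name__)

@lru_cache(maxsize=4096)
def predict_cached(features):
    """Cache repeated feature vectors to keep inference latency low and stable."""
    return int(model.predict([list(features)])[0])

@app.route("/predict", methods=["POST"])
def predict():
    f = request.get_json()
    features = (f["buffer_occupancy"], f["contact_frequency"], f["energy_level"])
    return jsonify({"relay_suitable": predict_cached(features)})

if __name__ == "__main__":
    app.run(port=5000)
```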
In recent years, sequence features such as packet length have received considerable attention due to their central role in encrypted traffic analysis. Existing sequence modeling approaches can be broadly categorized into flow-level and trace-level methods: the former suffer from high feature redundancy, limiting their discriminative power, whereas the latter preserve complete information but incur substantial computational and storage overhead. To address these limitations, we propose the Up-Down Flow Sequence (UDFS) representation, which compresses an entire trace into a two-dimensional sequence and characterizes each flow by the aggregate of its upstream and downstream traffic, reducing complexity while maintaining high discriminability. Furthermore, to address the challenge of class-specific discriminability differences, we propose an adaptive threshold mechanism that dynamically adjusts training weights and rejection boundaries, enhancing the model's classification performance. Experimental results demonstrate that the proposed method achieves superior classification performance and robustness on both coarse-grained and fine-grained datasets, as well as under concept drift and open-world scenarios. Code and dataset are available at https://github.com/kid1999/UDFS.
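The UDFS construction lends itself to a short sketch: aggregate each flow's upstream and downstream bytes into one pair, keeping flows in the order of their first appearance. The field semantics below are illustrative, not the paper's exact definition.

```python
# Sketch of compressing a trace into an Up-Down Flow Sequence.
def udfs(packets):
    """`packets` is an iterable of (flow_id, direction, size) with direction 'up'/'down'.
    Returns one (upstream_bytes, downstream_bytes) pair per flow, in order of first appearance."""
    agg, order = {}, []
    for flow_id, direction, size in packets:
        if flow_id not in agg:
            agg[flow_id] = [0, 0]
            order.append(flow_id)
        agg[flow_id][0 if direction == "up" else 1] += size
    return [tuple(agg[f]) for f in order]     # two-dimensional sequence over the trace

trace = [(1, "up", 120), (1, "down", 1400), (2, "up", 80), (1, "down", 1400), (2, "down", 300)]
print(udfs(trace))   # [(120, 2800), (80, 300)]
```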
Hardware acceleration in modern networks creates monitoring blind spots by offloading flows to a non-observable state, hindering real-time service degradation (SD) detection. To address this, we propose and formalize a novel inter-flow correlation framework, built on the hypothesis that observable flows can act as environmental sensors for concurrent, non-observable flows. We conduct a comprehensive statistical analysis of this inter-flow landscape, revealing a fundamental trade-off: while the potential for correlation is vast, the most explicit signals (i.e., co-occurring SD events) are sparse and rarely perfectly align. Critically, however, our analysis shows these signals frequently precede degradation in the target flow, validating the potential for timely detection. We then evaluate the framework using a standard machine learning model. While the model achieves high classification accuracy, a feature-importance analysis reveals it relies primarily on simpler intra-flow features. This key finding demonstrates that harnessing the complex contextual information requires more than simple models. Our work thus provides not only a foundational analysis of the inter-flow problem but also a clear outline for future research into the structure-aware models needed to solve it.
Beamforming techniques are utilized in millimeter wave (mmWave) communication to address the inherent path loss limitation, thereby establishing and maintaining reliable connections. However, adopting the standard-defined beamforming approach in highly dynamic vehicular environments often incurs high beam training overheads and reduces the available airtime for communications, mainly due to the exchange of pilot signals and exhaustive beam measurements. To this end, we present a multi-modal sensing and fusion learning framework as a potential alternative solution to reduce such overheads. In this framework, we first extract features individually from the visual and GPS-coordinate sensing modalities using modality-specific encoders, and subsequently fuse the multimodal features to predict the top-k beams so that the best line-of-sight links can be proactively established. To show the generalizability of the proposed framework, we perform a comprehensive experiment on four different vehicle-to-vehicle (V2V) scenarios from a real-world multi-modal sensing and communication dataset. From the experiment, we observe that the proposed framework achieves up to 77.58% accuracy in predicting the top-15 beams correctly, outperforms single modalities, incurs an average power loss as low as roughly 2.32 dB, and considerably reduces the beam search space overhead by 76.56% for the top-15 beams with respect to the standard-defined approach.
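A schematic of the fusion-then-top-k step, with untrained stand-ins for the modality-specific encoders and the fusion head; the codebook size and feature dimensions are assumptions, not the dataset's actual configuration.

```python
# Illustrative fusion of visual and GPS features to rank top-k candidate beams.
import numpy as np

rng = np.random.default_rng(0)
N_BEAMS = 64                              # beam codebook size (illustrative)

def image_encoder(img):                   # stand-in for the visual encoder
    return rng.standard_normal(32)

def gps_encoder(coords):                  # stand-in for the GPS-coordinate encoder
    return rng.standard_normal(16)

W = rng.standard_normal((N_BEAMS, 48))    # fusion head; would be learned in practice

def predict_topk(img, coords, k=15):
    z = np.concatenate([image_encoder(img), gps_encoder(coords)])  # feature fusion
    logits = W @ z
    return np.argsort(logits)[::-1][:k]   # indices of the top-k candidate beams

print(predict_topk(img=None, coords=(33.42, -111.93), k=15))
```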
This paper presents a detailed and flexible power consumption model for Radio Units (RUs) in O-RAN using the ns3-oran simulator. This is the first ns3-oran model that supports xApp control for RU power modeling. In contrast to existing frameworks such as EARTH or VBS-DRX, the proposed framework is RU-centric and is parameterized by hardware-level features, such as the number of transceivers, the efficiency of the power amplifier, mmWave overheads, and standby behavior. It enables simulation-driven assessment of energy efficiency at various transmit power levels and seamlessly integrates with ns-3's energy tracking system. Numerical results validate the model's capacity to represent realistic nonlinear power scaling and identify ideal operating points for efficient RU behavior, supporting upcoming xApp-driven energy management strategies in O-RAN installations.
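A simple parametric power function in the spirit of the described parameterization (transceiver count, PA efficiency, mmWave overhead, standby); all constants below are placeholders rather than the paper's calibrated ns3-oran values.

```python
# Illustrative RU power model; numbers are invented for demonstration only.
def ru_power_watts(p_tx_watts, n_trx=4, p_fixed=20.0, pa_efficiency=0.3,
                   mmwave_overhead=5.0, standby=False, p_standby=6.0):
    """Per-RU power as a function of transmit power and hardware-level parameters."""
    if standby:
        return n_trx * p_standby
    per_chain = p_fixed + p_tx_watts / pa_efficiency + mmwave_overhead
    return n_trx * per_chain

for p_tx in (0.5, 1.0, 2.0, 4.0):
    print(p_tx, "W tx ->", round(ru_power_watts(p_tx), 1), "W total")
```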
Federated Learning (FL) facilitates collaboration opportunities for devices, and edge computing enables aggregation at the edge, reducing latency. However, model-based outlier detection mechanisms for countering model poisoning attacks may not operate efficiently with heterogeneous models or in recognizing complex attacks. This paper strengthens the defense against model poisoning attacks by exploiting device-level traffic analysis to anticipate the reliability of participants. FL is empowered with a topology mutation strategy, a Moving Target Defence (MTD) strategy that dynamically changes the participants in learning. Based on recurrent neural networks for time-series analysis of traffic and a 6G wireless model, an optimization framework for the MTD strategy is given. A deep reinforcement learning mechanism is provided to optimize topology mutation in adaptation to the anticipated Byzantine status of devices and the communication channel capabilities at the devices. For a DDoS attack detection application under botnet attacks at the device level, results illustrate acceptable exclusion of malicious models and improvements in recognition time and accuracy.
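A toy participant-selection step in the spirit of the topology mutation (the paper uses a deep reinforcement learning policy; the linear scoring and weights below are purely illustrative):

```python
# Illustrative MTD-style participant re-selection from anticipated reliability
# (e.g., from RNN traffic analysis) and channel capability; weights are arbitrary.
import numpy as np

def mutate_topology(reliability, channel_capacity, k=5):
    score = 0.7 * np.asarray(reliability) + 0.3 * np.asarray(channel_capacity)
    return np.argsort(score)[::-1][:k]     # keep the k most promising devices

rel = [0.9, 0.2, 0.8, 0.95, 0.4, 0.7]      # anticipated non-Byzantine probability
cap = [0.5, 0.9, 0.6, 0.7, 0.8, 0.3]       # normalized channel capability
print(mutate_topology(rel, cap, k=3))
```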
This paper presents an agent-based simulation model for coordinating battery recharging in drone swarms, focusing on applications in Internet of Things (IoT) and Industry 4.0 environments. The proposed model includes a detailed description of the simulation methodology, system architecture, and implementation. One practical use case is explored: Smart Farming, highlighting how autonomous coordination strategies can optimize battery usage and mission efficiency in large-scale drone deployments. This work uses a machine learning technique to analyze the output of the agent-based simulation's sensitivity analysis.
We present a GPU-accelerated proximal message passing algorithm for large-scale network utility maximization (NUM). NUM is a fundamental problem in resource allocation, where resources are allocated across various streams in a network to maximize total utility while respecting link capacity constraints. Our method, a variant of ADMM, requires only sparse matrix-vector multiplies with the link-route matrix and element-wise proximal operator evaluations, enabling fully parallel updates across streams and links. It also supports heterogeneous utility types, including logarithmic utilities common in NUM, and does not assume strict concavity. We implement our method in PyTorch and demonstrate its performance on problems with tens of millions of variables and constraints, achieving 4x to 20x speedups over existing CPU and GPU solvers and solving problem sizes that exhaust the memory of baseline methods. Additionally, we show that our algorithm is robust to congestion and link-capacity degradation. Finally, using a time-expanded transit seat allocation case study, we illustrate how our approach yields interpretable allocations in realistic networks.
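The per-iteration primitives are easy to sketch: sparse matrix-vector products with the link-route matrix and a closed-form element-wise proximal operator for weighted log utilities. The loop structure and parameters below are illustrative and do not reproduce the authors' full ADMM variant.

```python
# Sketch of the per-iteration primitives only (sparse matvecs + elementwise prox).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n_links, n_streams = 1000, 5000
R = sp.random(n_links, n_streams, density=0.002, format="csr", random_state=0)
R.data[:] = 1.0                                     # link-route incidence matrix
capacity = np.full(n_links, 10.0)
weights = rng.uniform(0.5, 2.0, n_streams)          # utility weights w_i

def prox_log_utility(v, w, rho):
    """prox of -w*log(x): positive root of rho*x^2 - rho*v*x - w = 0 (closed form)."""
    return 0.5 * (v + np.sqrt(v**2 + 4.0 * w / rho))

rates = np.ones(n_streams)
link_load = R @ rates                               # sparse matvec: traffic per link
overload = np.maximum(link_load - capacity, 0.0)
feedback = R.T @ overload                           # sparse matvec: per-stream penalty signal
rates = prox_log_utility(rates - 0.01 * feedback, weights, rho=1.0)
print("max link load:", float((R @ rates).max()))
```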
Industrial URLLC workloads (coordinated robotics, automated guided vehicles, machine-vision collaboration) require sub-5 ms latency and five-nines reliability. In standardized 5G Multicast/Broadcast Services, intra-cell group traffic remains anchored in the core at the MB-SMF/MB-UPF and the Application Function. This incurs a core-network path and packet delay that is avoidable when data transmitters and receivers share a cell. We propose a gNB-local multicast breakout that pivots eligible uplink flows to a downlink point-to-multipoint bearer within the gNB, while maintaining authorization, membership, and policy in the 5G core. The design specifies an eligibility policy and configured-grant uplink. 3GPP security and compliance are preserved via unchanged control-plane anchors. A latency budget and simulation indicate that removing the backhaul/UPF/AF segment reduces end-to-end latency from approximately 6.5-11.5 ms (anchored to the core) to 1.5-4.0 ms (local breakout), producing sub-2 ms averages and a stable gap of approximately 10 ms across group sizes. The approach offers a practical, standards-aligned path to deterministic intra-cell group dissemination in private 5G. We outline multi-cell and prototype validation as future work.
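The latency-budget comparison reduces to simple arithmetic over path segments; the per-segment values below are invented for illustration and only roughly match the ranges reported in the abstract.

```python
# Back-of-the-envelope latency budget; segment values are illustrative only.
anchored = {"UE->gNB": 0.5, "gNB->backhaul": 2.5, "UPF/AF processing": 4.0,
            "core->gNB": 2.5, "gNB->UEs (PTM)": 1.5}
breakout = {"UE->gNB": 0.5, "gNB-local pivot": 0.5, "gNB->UEs (PTM)": 0.5}

print("anchored total  :", sum(anchored.values()), "ms")
print("breakout total  :", sum(breakout.values()), "ms")
print("saved per packet:", sum(anchored.values()) - sum(breakout.values()), "ms")
```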
The transition to 6G has driven significant updates to the 3GPP channel model, particularly in modeling UE antennas and user-induced blockage for handheld devices. The 3GPP Rel.19 revision of TR 38.901 introduces a more realistic framework that captures directive antenna patterns, practical antenna placements, polarization effects, and element-specific blockage. These updates are based on high-fidelity simulations and measurements of a reference smartphone across multiple frequency ranges. By aligning link- and system-level simulations with real-world device behavior, the new model enables more accurate evaluation of 6G technologies and supports consistent performance assessment across industry and research.
Quantum Key Distribution (QKD) provides information-theoretic security, but is limited by distance in optical networks, thereby requiring repeater nodes to extend coverage. Existing works usually assume all repeater nodes and associated Key Management Servers (KMSs) to be Trusted Repeater Nodes (TRNs), while ignoring risks from software exploits and insider threats. In this paper, we propose a reliability-aware TRN placement framework for metro optical networks, which assigns each node a trust score and integrates it into the Dijkstra algorithm via weighted links. We then rank the nodes using a composite score, which is a weighted combination of betweenness centrality and eigenvector centrality to enable a secure and scalable TRN deployment. Simulation results on a reference topology show that our method covers 10.77% more shortest paths compared to traditional metrics like degree centrality, using the same number (around eight) of TRNs, making it suitable for TRN selection to maximize secure connectivity.
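Both ingredients, trust-weighted shortest paths and the composite centrality ranking, can be sketched with networkx; the topology, trust scores, and the 0.6/0.4 weighting below are illustrative assumptions rather than the paper's reference topology or parameters.

```python
# Sketch: node trust folded into link weights for Dijkstra, plus a composite
# betweenness/eigenvector centrality ranking of TRN candidates.
import networkx as nx

trust = {"A": 0.9, "B": 0.5, "C": 0.95, "D": 0.8, "E": 0.7}   # per-node trust scores
G = nx.Graph()
for u, v in [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("C", "E")]:
    # lower endpoint trust -> heavier link, so paths avoid less trusted nodes
    G.add_edge(u, v, weight=1.0 / (trust[u] * trust[v]))

path = nx.dijkstra_path(G, "A", "E", weight="weight")

bc = nx.betweenness_centrality(G, weight="weight")
ec = nx.eigenvector_centrality(G, max_iter=1000)
alpha = 0.6                                    # illustrative mix of the two centralities
composite = {n: alpha * bc[n] + (1 - alpha) * ec[n] for n in G}
ranked = sorted(composite, key=composite.get, reverse=True)

print("trust-aware shortest path:", path)
print("TRN candidates by composite score:", ranked)
```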