Browse, search and filter the latest cybersecurity research papers from arXiv
Virtual dynamic environments (VDEs) such as the Metaverse and digital twins (DTs) require proper representation of the interacting entities to map their characteristics within the simulated or augmented space. Keeping these representations accurate and up-to-date is crucial for seamless interaction and system reliability. In this paper, we propose bidirectional age of incorrect information (BAoII) to address this aspect. BAoII quantifies the time-dependent penalty paid by an entity in a VDE due to incorrect or outdated knowledge about itself and the overall dynamically changing space. This extends the concept of age of incorrect information for a bidirectional information exchange, capturing that a VDE requires mutual awareness of the entity's own representation, measured in the virtual space, and what the other entities share about their representations. Using a continuous-time Markov chain model, we derive a closed-form expression for long-term BAoII and identify a transmission cost threshold for optimal update strategies. We describe a trade-off between communication cost and information freshness and validate our model through numerical simulations, demonstrating the impact of BAoII on evaluating system performance and highlighting its relevance for real-time collaboration in the Metaverse and DTs.
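To make the penalty concrete, the following is a minimal sketch, assuming a two-entity exchange with binary states, exponential change and update rates, and an always-update policy; it estimates the long-run fraction of time either side holds an incorrect view rather than the age-weighted BAoII derived in the paper.

```python
import random

def simulate_baoii(rate_change=1.0, rate_update=2.0, horizon=10_000.0, seed=0):
    """Event-driven CTMC sketch: entities A and B each have a binary true state
    that flips at `rate_change`, and each side's view of the other is refreshed
    at `rate_update`. The penalty accrues whenever either view is incorrect."""
    rng = random.Random(seed)
    true_state = [0, 0]   # actual state of A and of B
    estimate = [0, 0]     # what the other side currently believes about A / B
    t = penalty = 0.0
    rates = [rate_change, rate_change, rate_update, rate_update]
    while t < horizon:
        dt = rng.expovariate(sum(rates))
        # between events nothing changes, so accrue penalty for the whole gap
        if estimate[0] != true_state[0] or estimate[1] != true_state[1]:
            penalty += dt
        t += dt
        event = rng.choices(range(4), weights=rates, k=1)[0]
        if event < 2:
            true_state[event] ^= 1            # the entity's own state changed
        else:
            estimate[event - 2] = true_state[event - 2]   # an update arrived
    return penalty / t    # long-run fraction of time with a mismatched view

print(simulate_baoii())
```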
As 6G networks are developed and defined, offloading of XR applications is emerging as one of the strong new use cases. The reduced latency of 6G, coupled with edge processing infrastructure, will for the first time provide a realistic offloading scenario in cellular networks, in which several computationally intensive functions, including rendering, can migrate from the user device into the network. A key advantage of doing so is lower battery demand in user devices and the possibility of designing new devices with smaller form factors.
Sixth generation (6G) networks demand tight integration of artificial intelligence (AI) into radio access networks (RANs) to meet stringent quality of service (QoS) and resource efficiency requirements. Existing solutions struggle to bridge the gap between high-level user intents and the low-level, parameterized configurations required for optimal performance. To address this challenge, we propose RIDAS, a multi-agent framework composed of representation-driven agents (RDAs) and an intention-driven agent (IDA). RDAs expose open interfaces with tunable control parameters (rank and quantization bits), enabling explicit trade-offs between distortion and transmission rate. The IDA employs a two-stage planning scheme (bandwidth pre-allocation and reallocation) driven by a large language model (LLM) to map user intents and system state into optimal RDA configurations. Experiments demonstrate that RIDAS supports 44.71% more users than WirelessAgent under equivalent QoS constraints. These results validate the ability of RIDAS to capture user intent and allocate resources more efficiently in AI-RAN environments.
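As a rough illustration of the rank/quantization trade-off such an RDA-style interface exposes, here is a hedged numpy sketch (not the RIDAS implementation): it truncates an SVD to a chosen rank, uniformly quantizes the reconstruction to a chosen bit depth, and reports an assumed payload-size formula against mean-squared distortion.

```python
import numpy as np

def rda_config_tradeoff(X, rank, q_bits):
    """Sketch of the trade-off a tunable interface exposes: keeping `rank`
    singular components quantized to `q_bits` bits lowers the payload but
    raises distortion. Payload formula and distortion proxy are assumptions."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    approx = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    step = (approx.max() - approx.min()) / (2 ** q_bits - 1)
    quantized = np.round(approx / step) * step          # uniform quantization
    distortion = float(np.mean((X - quantized) ** 2))   # MSE distortion
    payload_bits = rank * (X.shape[0] + X.shape[1]) * q_bits
    return payload_bits, distortion

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 64))
for rank, q in [(8, 4), (8, 8), (32, 8)]:
    bits, mse = rda_config_tradeoff(X, rank, q)
    print(f"rank={rank:2d}, bits={q}: payload={bits:6d} bits, MSE={mse:.3f}")
```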
The growing demand for low-latency computing in 6G is driving the use of UAV-based low-altitude mobile edge computing (MEC) systems. However, limited spectrum often leads to severe uplink interference among ground terminals (GTs). In this paper, we investigate a rate-splitting multiple access (RSMA)-enabled low-altitude MEC system, where a UAV-based edge server assists multiple GTs in concurrently offloading their tasks over a shared uplink. We formulate a joint optimization problem involving the UAV's 3D trajectory, RSMA decoding order, task offloading decisions, and resource allocation, aiming to mitigate multi-user interference and maximize energy efficiency. Given the high dimensionality, non-convex nature, and dynamic characteristics of this optimization problem, we propose a generative AI-enhanced deep reinforcement learning (DRL) framework to solve it efficiently. Specifically, we embed a diffusion model into the actor network to generate high-quality action samples, improving exploration in hybrid action spaces and avoiding local optima. In addition, a priority-based RSMA decoding strategy is designed to facilitate efficient successive interference cancellation with low complexity. Simulation results demonstrate that the proposed method for low-altitude MEC systems outperforms baseline methods, and that integrating the generative diffusion model (GDM) with RSMA can achieve significantly improved energy efficiency performance.
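To illustrate what a priority-based SIC decoding order means in practice, the following is a small sketch under assumed values: streams are decoded in descending priority and each decoded stream's received power is removed from the interference of later ones; the priority scores, powers, and gains are illustrative, not the paper's optimization variables.

```python
import math

def sic_rates(powers, gains, priorities, noise=1e-9, bandwidth=1e6):
    """Sketch of priority-based successive interference cancellation (SIC):
    streams are decoded in descending priority; each decoded stream's power
    is removed from the interference seen by later streams."""
    order = sorted(range(len(powers)), key=lambda i: priorities[i], reverse=True)
    remaining = [powers[i] * gains[i] for i in order]  # received powers
    rates = {}
    for k, idx in enumerate(order):
        interference = sum(remaining[k + 1:])          # not-yet-decoded streams
        sinr = remaining[k] / (interference + noise)
        rates[idx] = bandwidth * math.log2(1 + sinr)   # Shannon rate in bit/s
    return order, rates

order, rates = sic_rates(
    powers=[0.2, 0.1, 0.3], gains=[1e-6, 5e-6, 2e-6], priorities=[0.9, 0.4, 0.7])
print("decoding order:", order)
print({i: f"{r/1e6:.2f} Mbit/s" for i, r in rates.items()})
```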
Joint source-channel coding (JSCC) is an effective approach for semantic communication. However, current JSCC methods are difficult to integrate with existing communication network architectures, where application and network providers are typically different entities. Recently, a novel paradigm termed Split DeepJSCC has been under consideration to address this challenge. Split DeepJSCC employs a bit-level interface that enables separate design of source and channel codes, ensuring compatibility with existing communication networks while preserving the advantages of JSCC in terms of semantic fidelity and channel adaptability. In this paper, we propose a learning-based interface design that treats the interface parameters as trainable, achieving improved end-to-end performance compared to Split DeepJSCC. In particular, the interface enables specification of bit-level importance at the output of the source code. Furthermore, we propose an Importance-Aware Net that utilizes the interface-derived bit importance information, enabling dynamic adaptation to diverse channel bandwidth ratios and time-varying channel conditions. Experimental results show that our method improves performance in wireless image transmission tasks. This work provides a potential solution for realizing semantic communications in existing wireless networks.
Reducing latency in the Internet of Things (IoT) is a critical concern. While cloud computing facilitates communication, it falls short of meeting real-time requirements reliably. Edge and fog computing have emerged as viable solutions by positioning computing nodes closer to end users, offering lower latency and increased processing power. An edge-fog framework comprises various components, including edge and fog nodes, whose strategic placement is crucial as it directly impacts latency and system cost. This paper presents an effective and tunable node placement strategy based on a genetic algorithm to address the optimization problem of deploying edge and fog nodes. The main objective is to minimize latency and cost through optimal node placement. Simulation results demonstrate that the proposed framework achieves latency reductions of up to 2.77% and cost reductions of up to 31.15%.
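A minimal genetic-algorithm sketch of the placement idea is shown below, assuming a toy latency matrix, per-site costs, and a weighted latency-plus-cost fitness; the operators (truncation selection, one-point crossover, bit-flip mutation) are generic choices and not necessarily those tuned in the paper.

```python
import random

# Hypothetical problem data: latency (ms) from each user zone to each candidate
# site, and a deployment cost per site. Values are illustrative only.
LATENCY = [[5, 12, 20, 9], [15, 4, 11, 18], [22, 14, 6, 10]]
SITE_COST = [3.0, 2.0, 4.0, 2.5]
W_LATENCY, W_COST = 1.0, 2.0

def fitness(placement):
    """Lower is better: average best-case latency per zone plus total cost.
    `placement` is a 0/1 vector over candidate sites."""
    if not any(placement):
        return float("inf")                      # at least one node required
    lat = sum(min(row[j] for j, on in enumerate(placement) if on)
              for row in LATENCY) / len(LATENCY)
    cost = sum(c for c, on in zip(SITE_COST, placement) if on)
    return W_LATENCY * lat + W_COST * cost

def genetic_placement(pop_size=30, generations=100, p_mut=0.1, seed=1):
    rng = random.Random(seed)
    n = len(SITE_COST)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < p_mut:             # bit-flip mutation
                k = rng.randrange(n)
                child[k] ^= 1
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return best, fitness(best)

print(genetic_placement())
```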
Beyond hallucinations, another problem in program synthesis using LLMs is ambiguity in user intent. We illustrate the ambiguity problem in a networking context for LLM-based incremental configuration synthesis of route-maps and ACLs. These structures frequently overlap in header space, making the relative priority of actions impossible for the LLM to infer without user interaction. Measurements in a large cloud identify complex ACLs with hundreds of overlaps, showing that ambiguity is a real problem. We propose a prototype system, Clarify, which uses an LLM augmented with a new module called a Disambiguator that helps elicit user intent. On a small synthetic workload, Clarify incrementally synthesizes routing policies after disambiguation and then verifies them. Our treatment of ambiguities is useful more generally when the intent of updates can be correctly synthesized by LLMs, but their integration is ambiguous and can lead to different global behaviors.
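The kind of header-space overlap that creates this ambiguity can be detected mechanically; the sketch below, using a hypothetical rule format, flags ACL rule pairs whose source and destination prefixes both intersect, which is exactly where relative priority matters.

```python
import ipaddress

# Hypothetical ACL rules: (source prefix, destination prefix, action).
RULES = [
    ("10.0.0.0/8",    "192.168.1.0/24", "permit"),
    ("10.1.0.0/16",   "192.168.0.0/16", "deny"),
    ("172.16.0.0/12", "192.168.1.0/24", "permit"),
]

def overlapping_pairs(rules):
    """Return pairs of rule indices whose source AND destination prefixes
    overlap; for such pairs the relative rule order decides the action,
    which is exactly the ambiguity an LLM cannot resolve on its own."""
    pairs = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            src_i = ipaddress.ip_network(rules[i][0])
            src_j = ipaddress.ip_network(rules[j][0])
            dst_i = ipaddress.ip_network(rules[i][1])
            dst_j = ipaddress.ip_network(rules[j][1])
            if src_i.overlaps(src_j) and dst_i.overlaps(dst_j):
                pairs.append((i, j))
    return pairs

print(overlapping_pairs(RULES))   # -> [(0, 1)] with the sample rules above
```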
Ever since Clos topologies were used in datacenter networks (DCNs), a practical centralized scheduling algorithm that supports dynamic scheduling has been absent. The introduction of optical switches in DCNs as a future-proof solution exacerbates this problem due to several properties of optical switches: they are generally bufferless and therefore rely on centralized scheduling, and they have long switching times and therefore require the number of rearrangements to be minimized. In this paper, we propose a centralized scheduling algorithm that achieves theoretical maximum throughput even in one-rate bidirectional Clos networks, while producing schedules with near-minimal numbers of rearrangements. To date, it is the only algorithm that directly supports bidirectional Clos networks and is time-efficient enough to support dynamic scheduling. For static minimal rewiring, its running time ranges from a few hundredths to a fraction of that of other algorithms, and the number of rearrangements is also consistently reduced, allowing for more frequent adjustments and less impact on ongoing communications. In addition, the algorithm is very flexible and can support various functional requirements in real-world environments. We achieve this result through the replacement chain concept and bitset optimization.
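The bitset optimization mentioned above can be illustrated independently of the replacement-chain algorithm: keep, per input and output switch, a bitmask of middle switches with a free link, and pick any set bit of their AND. The sketch below is only this building block, with assumed sizes, not the paper's scheduler.

```python
class ClosScheduler:
    """Bitset sketch: for each input and output switch keep a bitmask of middle
    switches with a free link; a legal middle switch for a new connection is
    any set bit in the AND of the two masks."""

    def __init__(self, num_middle, num_in, num_out):
        full = (1 << num_middle) - 1
        self.in_free = [full] * num_in       # bit m set => link to middle m free
        self.out_free = [full] * num_out

    def assign(self, i, o):
        candidates = self.in_free[i] & self.out_free[o]
        if candidates == 0:
            return None                      # would need a rearrangement
        m = (candidates & -candidates).bit_length() - 1   # lowest free middle
        self.in_free[i] &= ~(1 << m)
        self.out_free[o] &= ~(1 << m)
        return m

sched = ClosScheduler(num_middle=4, num_in=4, num_out=4)
print([sched.assign(0, 1) for _ in range(5)])   # -> [0, 1, 2, 3, None]
```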
Task offloading in three-layer fog computing environments presents a critical challenge due to user equipment (UE) mobility, which frequently triggers costly service migrations and degrades overall system performance. This paper addresses this problem by proposing MOFCO, a novel Mobility- and Migration-aware Task Offloading algorithm for Fog Computing environments. The proposed method formulates task offloading and resource allocation as a Mixed-Integer Nonlinear Programming (MINLP) problem and employs a heuristic-aided evolutionary game theory approach to solve it efficiently. To evaluate MOFCO, we simulate mobile users using SUMO, providing realistic mobility patterns. Experimental results show that MOFCO reduces system cost, defined as a combination of latency and energy consumption, by an average of 19% and up to 43% in certain scenarios compared to state-of-the-art methods.
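For intuition about the evolutionary-game component, here is a small replicator-dynamics sketch, assuming fixed illustrative payoffs rather than MOFCO's latency-and-energy cost model: each offloading strategy's population share grows when its payoff beats the population average.

```python
def replicator_step(shares, payoffs, step=0.1):
    """One discrete replicator-dynamics update: a strategy (e.g., 'offload to
    fog node k' vs 'process locally') gains share in proportion to how much
    its payoff exceeds the population-average payoff."""
    avg = sum(s * p for s, p in zip(shares, payoffs))
    new = [s + step * s * (p - avg) for s, p in zip(shares, payoffs)]
    total = sum(new)
    return [s / total for s in new]              # renormalize to a distribution

# Illustrative payoffs for three offloading strategies (higher is better,
# e.g., the negative of a latency-plus-energy cost).
shares, payoffs = [1 / 3] * 3, [2.0, 1.0, 3.0]
for _ in range(50):
    shares = replicator_step(shares, payoffs)
print([round(s, 3) for s in shares])             # converges toward strategy 2
```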
As the path toward 6G networks is being charted, emerging applications have motivated evolutions of network architectures to realize efficient, reliable, and flexible wireless networks. Among the potential architectures, the non-terrestrial network (NTN) and open radio access network (ORAN) have received increasing interest from both academia and industry. Although the deployment of NTNs ensures coverage, enhances spectral efficiency, and improves the resilience of wireless networks, the high altitude and mobility of NTNs present new challenges in the development and operations (DevOps) lifecycle, hindering intelligent and scalable network management due to the lack of native artificial intelligence (AI) capability. With the advantages of ORAN in disaggregation, openness, virtualization, and intelligence, several works propose integrating ORAN principles into the NTN, focusing mainly on ORAN deployment options based on transparent and regenerative systems. However, a holistic view of how to effectively combine ORAN and NTN throughout the DevOps lifecycle is still missing, especially regarding how intelligent ORAN addresses the scalability challenges in NTN. Motivated by this, in this paper, we first provide background knowledge about ORAN and NTN, outline the state-of-the-art research on ORAN for NTNs, and present the DevOps challenges that motivate the adoption of ORAN solutions. We then propose an ORAN-based NTN framework, discussing its features and architectures in detail, including flexible fronthaul splits, RAN intelligent controller (RIC) enhancements for distributed learning, scalable deployment architectures, and multi-domain service management. Finally, we highlight future research directions, including combinations of the ORAN-based NTN framework with other enabling technologies and schemes, as well as candidate use cases.
Physicists often manually consider extreme cases when testing a theory. In this paper, we show how to automate extremal testing of network software using LLMs in two steps: first, ask the LLM to generate input constraints (e.g., DNS name length limits); then ask the LLM to generate tests that violate the constraints. We demonstrate how easy this process is by generating extremal tests for HTTP, BGP and DNS implementations, each of which uncovered new bugs. We show how this methodology extends to centralized network software such as shortest path algorithms, and how LLMs can generate filtering code to reject extremal input. We propose using agentic AI to further automate extremal testing. LLM-generated extremal testing goes beyond an old technique in software testing called Boundary Value Analysis.
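A concrete instance of the two-step recipe, using the DNS name-length constraint as the example: a full name is limited to 253 characters in presentation form and each label to 63. The toy validator below stands in for the implementation under test, and the tests are the kind of boundary-violating inputs the LLM would be asked to generate.

```python
# Step 1 (constraint the LLM would surface): a DNS name is at most 253
# characters in presentation form and each label is at most 63 characters.
MAX_NAME, MAX_LABEL = 253, 63

def is_valid_dns_name(name: str) -> bool:
    """Toy validator standing in for the implementation under test."""
    if len(name) > MAX_NAME:
        return False
    return all(0 < len(label) <= MAX_LABEL for label in name.rstrip(".").split("."))

# Step 2 (extremal tests the LLM would generate): inputs that sit just past
# the constraint boundaries, in the spirit of boundary value analysis.
def test_label_too_long():
    assert not is_valid_dns_name("a" * 64 + ".example.com")

def test_name_too_long():
    overlong = ".".join(["label"] * 60)          # far beyond 253 characters
    assert not is_valid_dns_name(overlong)

def test_boundary_is_accepted():
    assert is_valid_dns_name("a" * 63 + ".example.com")

for test in (test_label_too_long, test_name_too_long, test_boundary_is_accepted):
    test()
print("extremal DNS tests passed")
```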
We analyzed spatial complexity, defined as the relationship between the required bitrate and a corresponding picture Quality of Experience (QoE) metric, for realistic, long, real-time, interactive video clips. Apart from variation across different content types, e.g., game genres, we discovered second-to-second time variability within a clip and explored the ramifications for traffic management. We introduced utility as an elegant way to manage resource sharing preferences. Our analysis of resource sharing methods shows that frequent QoE-aware reallocation has significant performance advantages over static rate allocation, even when the latter is based on rich information about long-term average spatial complexity. We have also shown that utility-based resource allocation has clear advantages over methods targeting equal QoE allocation: it increases the average QoE while still controlling the worst-case QoE.
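A hedged sketch of utility-based sharing is given below: bitrate is handed out in small increments to whichever stream currently has the highest marginal utility, using an assumed concave log utility scaled by spatial complexity rather than the paper's measured QoE curves.

```python
import math

def allocate_by_utility(capacity_mbps, demands, step=0.1):
    """Greedy marginal-utility allocation: repeatedly give `step` Mbit/s to the
    stream whose utility would improve most. With concave utilities this
    approximates the utility-maximizing split of the shared link."""
    def utility(rate, complexity):
        # Illustrative concave QoE proxy: diminishing returns in rate, scaled
        # by the clip's spatial complexity.
        return math.log1p(rate / complexity)

    rates = [0.0] * len(demands)
    budget = capacity_mbps
    while budget > 1e-9:
        gains = [
            utility(r + step, c) - utility(r, c) if r + step <= d else -1.0
            for r, (d, c) in zip(rates, demands)
        ]
        best = max(range(len(gains)), key=lambda i: gains[i])
        if gains[best] <= 0:
            break                                # every stream is saturated
        rates[best] += step
        budget -= step
    return rates

# (max demand in Mbit/s, spatial complexity) per stream; values illustrative.
print(allocate_by_utility(10.0, [(6.0, 1.0), (6.0, 3.0), (6.0, 0.5)]))
```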
Twelve years have passed since World IPv6 Launch Day, but what is the current state of IPv6 deployment? Prior work has examined IPv6 status as a binary: can you use IPv6, or not? As deployment increases, we must consider a more nuanced, non-binary perspective on IPv6: how much and how often can a user or a service use IPv6? We consider this question from the perspectives of a client, a server, and a cloud provider. From the client's perspective, we observe user traffic. We see that the fraction of IPv6 traffic a user sends varies greatly, both across users and day by day, with a standard deviation of over 15%. We show this variation occurs for two main reasons. First, IPv6 traffic is primarily human-generated and thus shows diurnal patterns. Second, some services are IPv6-forward and others are IPv6 laggards, so as users do different things, their fraction of IPv6 varies. We look at server-side IPv6 adoption in two ways. First, we expand analysis of web services to examine how many are only partially IPv6-enabled due to their reliance on IPv4-only resources. Our findings reveal that only 12.5% of the top 100k websites qualify as fully IPv6-ready. Finally, we examine cloud support for IPv6. Although all clouds and CDNs support IPv6, we find that tenant deployment rates vary significantly across providers. We find that the ease of enabling IPv6 in the cloud is correlated with tenant IPv6 adoption rates, and we recommend best practices for cloud providers to improve IPv6 adoption. Our results suggest IPv6 deployment is growing, but many services lag, presenting a potential for improvement.
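The per-user variability measurement can be illustrated with a few lines of Python over hypothetical per-day byte counts: compute each user's daily IPv6 fraction and its day-to-day standard deviation (the paper reports over 15% on real traffic; the numbers below are made up).

```python
from statistics import mean, pstdev

# Hypothetical per-user daily byte counts: (ipv6_bytes, ipv4_bytes) per day.
user_days = {
    "user_a": [(8e8, 2e8), (1e8, 9e8), (5e8, 5e8)],
    "user_b": [(9e8, 1e8), (8.5e8, 1.5e8), (9.2e8, 0.8e8)],
}

def ipv6_fractions(days):
    """Per-day share of traffic carried over IPv6."""
    return [v6 / (v6 + v4) for v6, v4 in days]

for user, days in user_days.items():
    fracs = ipv6_fractions(days)
    print(f"{user}: mean {mean(fracs):.0%}, day-to-day std dev {pstdev(fracs):.0%}")
```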
Federated Learning (FL) enables collaborative model training on decentralized data without exposing raw data. However, the evaluation phase in FL may leak sensitive information through shared performance metrics. In this paper, we propose a novel protocol that incorporates Zero-Knowledge Proofs (ZKPs) to enable privacy-preserving and verifiable evaluation for FL. Instead of revealing raw loss values, clients generate a succinct proof asserting that their local loss is below a predefined threshold. Our approach is implemented without reliance on external APIs, using self-contained modules for federated learning simulation, ZKP circuit design, and experimental evaluation on both the MNIST and Human Activity Recognition (HAR) datasets. We focus on a threshold-based proof for a simple Convolutional Neural Network (CNN) model (for MNIST) and a multi-layer perceptron (MLP) model (for HAR), and evaluate the approach in terms of computational overhead, communication cost, and verifiability.
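The statement being proven can be illustrated without any ZKP library: the sketch below checks, in the clear and over an assumed fixed-point encoding, the relation "my local loss is below the public threshold" that the actual circuit would constrain, with the loss kept as the private witness.

```python
SCALE = 10_000   # assumed fixed-point scaling to represent losses as integers

def encode_loss(loss: float) -> int:
    """ZKP circuits operate over integers/field elements, so the floating-point
    loss is committed as a fixed-point value."""
    return int(round(loss * SCALE))

def threshold_statement(private_loss: float, public_threshold: float) -> bool:
    """The relation the proof attests to: 'my local loss is below the agreed
    threshold' -- without revealing the loss itself. Here it is checked in the
    clear purely to illustrate what the circuit constrains."""
    return encode_loss(private_loss) < encode_loss(public_threshold)

# A client with local loss 0.182 proves compliance with a 0.25 threshold; the
# server learns only that the statement holds, never the value 0.182.
print(threshold_statement(private_loss=0.182, public_threshold=0.25))  # True
print(threshold_statement(private_loss=0.310, public_threshold=0.25))  # False
```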
Wireless networks are vulnerable to jamming attacks due to the shared communication medium, which can severely degrade performance and disrupt services. Despite extensive research, current jamming detection methods often rely on simulated data or proprietary over-the-air datasets with limited cross-layer features, failing to accurately represent the real state of a network and thus limiting their effectiveness in real-world scenarios. To address these challenges, we introduce JamShield, a dynamic jamming detection system trained on our own collected over-the-air and publicly available dataset. It utilizes hybrid feature selection to prioritize relevant features for accurate and efficient detection. Additionally, it includes an auto-classification module that dynamically adjusts the classification algorithm in real-time based on current network conditions. Our experimental results demonstrate significant improvements in detection rate, precision, and recall, along with reduced false alarms and misdetections compared to state-of-the-art detection algorithms, making JamShield a robust and reliable solution for detecting jamming attacks in real-world wireless networks.
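As a rough sketch of what hybrid feature selection can look like (the combination rule and synthetic data are assumptions, not JamShield's pipeline), the example below intersects a mutual-information ranking with a random-forest importance ranking using scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for cross-layer jamming features (RSSI, noise floor,
# retry counts, ...); the real system uses over-the-air measurements.
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)

def hybrid_select(X, y, k=8):
    """Hybrid filter + embedded selection sketch: rank features by mutual
    information and by random-forest importance, then keep those ranked in the
    top-k by both criteria."""
    mi_rank = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1][:k]
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    rf_rank = np.argsort(rf.feature_importances_)[::-1][:k]
    return sorted(set(mi_rank) & set(rf_rank))

print("selected feature indices:", hybrid_select(X, y))
```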
Time-Sensitive Networking (TSN) is increasingly adopted in industrial systems to meet strict latency, jitter, and reliability requirements. However, evaluating TSN's fault tolerance under realistic failure conditions remains challenging. This paper presents IN2C, a modular OMNeT++/INET-based simulation framework that models two synchronized production cells connected to centralized infrastructure. IN2C integrates core TSN features, including time synchronization, traffic shaping, per-stream filtering, and Frame Replication and Elimination for Redundancy (FRER), alongside XML-driven fault injection for link and node failures. Four fault scenarios are evaluated to compare TSN performance with and without redundancy. Results show that FRER eliminates packet loss and achieves submillisecond recovery, though with 2-3x higher link utilization. These findings offer practical guidance for deploying TSN in bandwidth-constrained industrial environments.
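The elimination half of FRER can be sketched in a few lines: replicated frames carry a per-stream sequence number and only the first copy is delivered. The class below is a simplified recovery function with an assumed history window, not a full IEEE 802.1CB implementation.

```python
class SequenceRecovery:
    """Simplified FRER-style duplicate elimination: frames replicated over
    disjoint paths share a sequence number; only the first copy is delivered."""

    def __init__(self, history_length: int = 32):
        self.history_length = history_length
        self.seen = set()            # sequence numbers already delivered
        self.highest = -1

    def accept(self, seq: int) -> bool:
        if seq in self.seen or seq <= self.highest - self.history_length:
            return False             # duplicate (or too old to track): drop
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        # keep the history bounded
        self.seen = {s for s in self.seen if s > self.highest - self.history_length}
        return True                  # first copy on any path: deliver

recovery = SequenceRecovery()
arrivals = [0, 1, 1, 2, 0, 3, 2, 4]  # interleaved copies from path A and path B
print([seq for seq in arrivals if recovery.accept(seq)])   # -> [0, 1, 2, 3, 4]
```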
Finite-State Machines (FSMs) are critical for modeling the operational logic of network protocols, enabling verification, analysis, and vulnerability discovery. However, existing FSM extraction techniques face limitations such as scalability, incomplete coverage, and ambiguity in natural language specifications. In this paper, we propose FlowFSM, a novel agentic framework that leverages Large Language Models (LLMs) combined with prompt chaining and chain-of-thought reasoning to extract accurate FSMs from raw RFC documents. FlowFSM systematically processes protocol specifications, identifies state transitions, and constructs structured rule-books by chaining agent outputs. Experimental evaluation across FTP and RTSP protocols demonstrates that FlowFSM achieves high extraction precision while minimizing hallucinated transitions, showing promising results. Our findings highlight the potential of agent-based LLM systems in the advancement of protocol analysis and FSM inference for cybersecurity and reverse engineering applications.
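One lightweight check in the spirit of the rule-book construction is to flag transitions that reference states never defined in the specification, which is one way hallucinated transitions surface; the sketch below uses an illustrative FTP-like state set, not FlowFSM's actual prompts or schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    state: str        # current protocol state
    event: str        # received message or command
    next_state: str   # resulting state

def flag_hallucinated(transitions, known_states):
    """Post-processing check on an extracted rule-book: any transition that
    references a state not defined in the specification is suspect."""
    return [t for t in transitions
            if t.state not in known_states or t.next_state not in known_states]

# Toy FTP-like example; states and transitions are illustrative only.
known = {"IDLE", "AUTHENTICATING", "LOGGED_IN"}
extracted = [
    Transition("IDLE", "USER", "AUTHENTICATING"),
    Transition("AUTHENTICATING", "PASS", "LOGGED_IN"),
    Transition("LOGGED_IN", "QUIT", "CLOSED"),      # 'CLOSED' was never defined
]
print(flag_hallucinated(extracted, known))
```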
The increasing need for robustness, reliability, and determinism in wireless networks for industrial and mission-critical applications is the driver for the growth of new innovative methods. The study presented in this work makes use of machine learning techniques to predict channel quality in a Wi-Fi network in terms of the frame delivery ratio. Predictions can be used proactively to adjust communication parameters at runtime and optimize network operations for industrial applications. Methods including convolutional neural networks and long short-term memory were analyzed on datasets acquired from a real Wi-Fi setup across multiple channels. The models were compared in terms of prediction accuracy and computational complexity. Results show that the frame delivery ratio can be reliably predicted, and convolutional neural networks, although slightly less effective than other models, are more efficient in terms of CPU usage and memory consumption. This enhances the model's usability on embedded and industrial systems.
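A compact PyTorch sketch of the CNN variant is shown below, assuming a 32-sample sliding window, small layer sizes, and synthetic training data in place of the real Wi-Fi traces; it maps a window of past frame-delivery-ratio samples to the next value.

```python
import torch
import torch.nn as nn

WINDOW = 32   # number of past frame-delivery-ratio samples per prediction

class FDRPredictor(nn.Module):
    """Small 1D CNN regressor: past FDR window -> next FDR value in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(8, 1), nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, 1, WINDOW)
        return self.net(x)

# Synthetic stand-in data: the real datasets come from a live Wi-Fi setup.
x = torch.rand(256, 1, WINDOW)
y = x.mean(dim=2)                    # toy target: mean of the window

model = FDRPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"final training MSE: {loss.item():.4f}")
```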