Browse, search and filter the latest cybersecurity research papers from arXiv
Detecting QoS anomalies in 5G user planes requires fine-grained per-flow visibility, but existing telemetry approaches face a fundamental trade-off. Coarse per-class counters are lightweight but mask transient and per-flow anomalies, while per-packet telemetry postcards provide full visibility at prohibitive cost that grows linearly with line rate. Selective postcard schemes reduce overhead but miss anomalies that fall below configured thresholds or occur during brief intervals. We present Kestrel, a sketch-based telemetry system for 5G user planes that provides fine-grained visibility into key metric distributions such as latency tails and inter-arrival times at a fraction of the cost of per-packet postcards. Kestrel extends Count-Min Sketch with histogram-augmented buckets and per-queue partitioning, which compress per-packet measurements into compact summaries while preserving anomaly-relevant signals. We develop formal detectability guarantees that account for sketch collisions, yielding principled sizing rules and binning strategies that maximize anomaly separability. Our evaluations on a 5G testbed with Intel Tofino switches show that Kestrel achieves 10% better detection accuracy than existing selective postcard schemes while reducing export bandwidth by 10x.
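To make the histogram-augmented sketch idea concrete, here is a minimal Python sketch of a Count-Min-style structure whose buckets hold small latency histograms rather than plain counters; the depth, width, bin edges, and hashing scheme are illustrative assumptions rather than Kestrel's actual parameters, and per-queue partitioning is omitted.

```python
# Illustrative sketch (not Kestrel's implementation): a Count-Min-style table
# whose buckets store small latency histograms instead of scalar counters.
import hashlib

class HistogramSketch:
    def __init__(self, depth=3, width=1024, bin_edges=(50, 100, 250, 500, 1000)):
        self.depth = depth
        self.width = width
        self.bin_edges = bin_edges                      # microsecond latency bins (assumed)
        self.nbins = len(bin_edges) + 1
        # depth x width buckets, each bucket is a small histogram
        self.table = [[[0] * self.nbins for _ in range(width)] for _ in range(depth)]

    def _indices(self, flow_key):
        for row in range(self.depth):
            h = hashlib.blake2b(f"{row}:{flow_key}".encode(), digest_size=8)
            yield row, int.from_bytes(h.digest(), "little") % self.width

    def _bin(self, latency_us):
        for i, edge in enumerate(self.bin_edges):
            if latency_us < edge:
                return i
        return self.nbins - 1

    def update(self, flow_key, latency_us):
        b = self._bin(latency_us)
        for row, col in self._indices(flow_key):
            self.table[row][col][b] += 1

    def query(self, flow_key):
        # Count-Min estimate: element-wise minimum histogram across rows
        hists = [self.table[row][col] for row, col in self._indices(flow_key)]
        return [min(h[b] for h in hists) for b in range(self.nbins)]

sketch = HistogramSketch()
sketch.update("ue17->dn:5tuple", latency_us=730)
print(sketch.query("ue17->dn:5tuple"))
```

Taking the element-wise minimum across rows mirrors the usual Count-Min estimate, so hash collisions can only inflate, never deflate, a flow's histogram.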
The rise of ultra-dense LEO constellations creates a complex and asynchronous network environment, driven by their massive scale, dynamic topologies, and significant delays. This unique complexity demands an adaptive packet routing algorithm that is asynchronous, risk-aware, and capable of balancing diverse and often conflicting QoS objectives in a decentralized manner. However, existing methods fail to address this need, as they typically rely on impractical synchronous decision-making and/or risk-oblivious approaches. To tackle this gap, we introduce PRIMAL, an event-driven multi-agent routing framework designed specifically to allow each satellite to act independently on its own event-driven timeline, while managing the risk of worst-case performance degradation via a principled primal-dual approach. This is achieved by enabling agents to learn the full cost distribution of the targeted QoS objectives and constrain tail-end risks. Extensive simulations on a LEO constellation with 1584 satellites validate its superiority in effectively optimizing latency and balancing load. Compared to a recent risk-oblivious baseline, it reduces queuing delay by over 70%, and achieves a nearly 12 ms end-to-end delay reduction in loaded scenarios. This is accomplished by resolving the core conflict between naive shortest-path finding and congestion avoidance, highlighting such autonomous risk-awareness as a key to robust routing.
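As a rough illustration of the primal-dual idea, the snippet below shows a single agent maintaining a dual variable that grows whenever the observed tail of its cost distribution exceeds a risk budget; the CVaR-style tail statistic, step size, and budget are assumptions for illustration, not PRIMAL's actual update rules.

```python
# Minimal sketch of a primal-dual tail-risk constraint, not PRIMAL itself.
import numpy as np

def tail_cost(samples, alpha=0.95):
    """CVaR-style statistic: mean of the costs above the alpha-quantile."""
    q = np.quantile(samples, alpha)
    tail = samples[samples >= q]
    return tail.mean() if tail.size else q

def dual_update(lmbda, observed_costs, budget=20.0, step=0.01):
    """Dual ascent: grow the multiplier when the tail exceeds the risk budget."""
    violation = tail_cost(np.asarray(observed_costs)) - budget
    return max(0.0, lmbda + step * violation)

def route_score(expected_delay, expected_tail, lmbda):
    """Lagrangian score an agent could use to rank next-hop candidates."""
    return expected_delay + lmbda * expected_tail

lmbda = 0.0
for _ in range(100):                       # simulated decision epochs
    costs = np.random.exponential(scale=15.0, size=256)
    lmbda = dual_update(lmbda, costs)
print(f"final dual variable: {lmbda:.3f}")
```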
The promise of decentralized peer-to-peer (P2P) systems is fundamentally gated by the challenge of Network Address Translation (NAT) traversal, with existing solutions often reintroducing the very centralization they seek to avoid. This paper presents the first large-scale, longitudinal measurement study of a fully decentralized NAT traversal protocol, Direct Connection Upgrade through Relay (DCUtR), within the production libp2p-based IPFS network. Drawing on over 4.4 million traversal attempts from 85,000+ distinct networks across 167 countries, we provide a definitive empirical analysis of modern P2P connectivity. We establish a contemporary baseline success rate of $70\% \pm 7.1\%$ for the hole-punching stage, providing a crucial new benchmark for the field. Critically, we empirically refute the long-held 'tribal knowledge' of UDP's superiority for NAT traversal, demonstrating that DCUtR's high-precision, RTT-based synchronization yields statistically indistinguishable success rates for both TCP and QUIC ($\sim70\%$). Our analysis further validates the protocol's design for permissionless environments by showing that success is independent of relay characteristics and that the mechanism is highly efficient, with $97.6\%$ of successful connections established on the first attempt. Building on this analysis, we propose a concrete roadmap of protocol enhancements aimed at achieving universal connectivity and contribute our complete dataset to foster further research in this domain.
This whitepaper presents parts of the results of the REDMARS2 project conducted in 2021-2022, exploring the integration of Recursive Internetwork Architecture (RINA) concepts into Delay- and Disruption-Tolerant Networking (DTN) protocols. Using Bundle-in-Bundle Encapsulation (BIBE), we implemented scope-based separation mechanisms resulting in scalable DTNs. A key contribution of this work is the demonstration of practical BIBE-based use cases, including a realistic Solar System Internet communication scenario involving unmanned aerial vehicles (UAVs) and satellite relays. The evaluation, supported by field tests in collaboration with the European Space Agency (ESA), confirmed the viability of BIBE as a foundation for scalable, recursive, and interoperable DTN architectures.
Patching nodes is an effective network defense strategy for malware control at early stages, and its performance is primarily dependent on how accurately the infection propagation is characterized. In this paper, we aim to design a novel patching policy based on the susceptible-infected epidemic network model by incorporating the influence of patching delay--the type of delay that has been largely overlooked in designing patching policies in the literature, while being prevalent in practice. We first identify 'critical edges' that form a boundary to separate the most likely infected nodes from the nodes which would still remain healthy after the patching delay. We next leverage the critical edges to determine which nodes to patch in light of limited patching resources at early stages. To this end, we formulate a constrained graph partitioning problem and use its solution to identify a set of nodes to patch or vaccinate under the limited resources, to effectively prevent malware propagation from reaching the healthy region. We numerically validate that our patching policy significantly outperforms other baseline policies in protecting the healthy nodes under limited patching resources and in the presence of patching delay.
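The following toy Python sketch illustrates the critical-edge notion under stated assumptions: nodes within a hop-count proxy for the patching delay are treated as likely infected, and edges crossing out of that set form the boundary; the example graph, seeds, and delay horizon are made up, and the constrained graph partitioning step is not shown.

```python
# Toy illustration of "critical edges": the boundary between the likely-infected
# region (reachable within the patching delay) and the still-healthy region.
from collections import deque

def likely_infected(adj, seeds, delay_hops):
    """Nodes within delay_hops of any seed under a simple SI spreading proxy."""
    dist = {s: 0 for s in seeds}
    q = deque(seeds)
    while q:
        u = q.popleft()
        if dist[u] == delay_hops:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)

def critical_edges(adj, infected):
    """Edges leaving the likely-infected region; patching their healthy
    endpoints blocks propagation into the healthy region."""
    return {(u, v) for u in infected for v in adj[u] if v not in infected}

adj = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
inf = likely_infected(adj, seeds=[0], delay_hops=2)
print(sorted(inf), sorted(critical_edges(adj, inf)))
```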
Sixth generation (6G) wireless networks are envisioned to deliver ultra-low latency, massive connectivity, and high data rates, enabling advanced applications such as autonomous unmanned aerial vehicle (UAV) swarms and aerial edge computing. However, realizing this vision in Flying Ad Hoc Networks (FANETs) requires intelligent and adaptive clustering mechanisms to ensure efficient routing and resource utilization. This paper proposes a novel machine learning-driven framework for dynamic cluster formation and cluster head selection in 6G-enabled FANETs. The system leverages mobility prediction using Extreme Gradient Boosting (XGBoost) and a composite optimization strategy based on signal strength and spatial proximity to identify optimal cluster heads. To evaluate the proposed method, comprehensive simulations were conducted in both centralized (5G) and decentralized (6G) topologies using realistic video traffic patterns. Results show that the proposed model achieves significant improvements in delay, jitter, and throughput in decentralized scenarios. These findings demonstrate the potential of combining machine learning with clustering techniques to enhance scalability, stability, and performance in next-generation aerial networks.
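A minimal sketch of how such a composite cluster-head score could be computed is shown below; the weights, the inverse-mean-distance proximity term, and the normalized RSSI values are assumptions, and the XGBoost mobility predictor is not included.

```python
# Toy composite cluster-head score: weighted signal strength plus spatial proximity.
import math

def proximity(node, members):
    """Inverse of the mean distance to the other cluster members."""
    d = [math.dist(node["pos"], m["pos"]) for m in members if m is not node]
    return 1.0 / (sum(d) / len(d) + 1e-9)

def head_score(node, members, w_signal=0.6, w_prox=0.4):
    return w_signal * node["rssi_norm"] + w_prox * proximity(node, members)

cluster = [
    {"id": "uav1", "pos": (0, 0, 120), "rssi_norm": 0.82},
    {"id": "uav2", "pos": (40, 10, 110), "rssi_norm": 0.67},
    {"id": "uav3", "pos": (15, 30, 130), "rssi_norm": 0.74},
]
head = max(cluster, key=lambda n: head_score(n, cluster))
print("selected cluster head:", head["id"])
```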
The analytical characterization of coverage probability in finite three-dimensional wireless networks has long remained an open problem, hindered by the loss of spatial independence in finite-node settings and the coupling between link distances and interference in bounded geometries. This paper closes this gap by presenting the first exact analytical framework for coverage probability in finite 3D networks modeled by a binomial point process within a cylindrical region. To bypass the intractability that has long hindered such analyses, we leverage the independence structure, convolution geometry, and derivative properties of Laplace transforms, yielding a formulation that is both mathematically exact and computationally efficient. Extensive Monte Carlo simulations verify the analysis and demonstrate significant accuracy gains over conventional Poisson-based models. The results generalize to any confined 3D wireless system, including aerial, underwater, and robotic networks.
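For context, these are the generic stochastic-geometry quantities such an analysis builds on, not the paper's exact final expression for the binomial point process in a cylindrical region.

```latex
% Generic definitions: coverage probability of a link with SINR threshold \theta,
% and the Laplace transform of the aggregate interference I.
\[
  P_{\mathrm{cov}}(\theta) \;=\; \Pr\!\left[\frac{h\,r^{-\alpha}}{\sigma^{2}+I} > \theta\right],
  \qquad
  \mathcal{L}_{I}(s) \;=\; \mathbb{E}\!\left[e^{-sI}\right].
\]
% For Rayleigh fading, h ~ Exp(1), and independent r and I (the classical case;
% the finite binomial setting couples them, which is what the paper resolves):
\[
  P_{\mathrm{cov}}(\theta)
  \;=\;
  \mathbb{E}_{r}\!\left[e^{-\theta r^{\alpha}\sigma^{2}}\,
  \mathcal{L}_{I}\!\left(\theta r^{\alpha}\right)\right].
\]
```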
Massive message transmissions, unpredictable aperiodic messages, and high-speed moving vehicles contribute to a complex wireless environment, resulting in inefficient resource collisions in Vehicle-to-Everything (V2X). In order to achieve better medium access control (MAC) layer performance, 3GPP introduced several new features in NR-V2X. One of the most important is the re-evaluation mechanism, which allows a vehicle to continuously sense resources before message transmission to avoid resource collisions. So far, only a few articles have studied the re-evaluation mechanism of NR-V2X, and they mainly rely on network simulators that do not consider variable traffic, which makes analysis and comparison difficult. In this paper, an analytical model of NR-V2X Mode 2 is established, and a message generator is constructed using a discrete-time Markov chain (DTMC) to simulate the traffic patterns recommended by 3GPP for advanced V2X services. Our study shows that the re-evaluation mechanism improves the reliability of NR-V2X transmission, but local improvements are still needed to reduce latency.
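As a toy illustration of a DTMC-driven message generator, the snippet below alternates between a periodic and an aperiodic traffic state; the two-state chain, transition probabilities, message types, and sizes are assumptions, not the 3GPP-recommended traffic model itself.

```python
# Toy two-state DTMC traffic generator (illustrative, not the 3GPP model).
import random

TRANSITIONS = {                      # P(next state | current state)
    "periodic": {"periodic": 0.9, "aperiodic": 0.1},
    "aperiodic": {"periodic": 0.4, "aperiodic": 0.6},
}

def step(state):
    r, acc = random.random(), 0.0
    for nxt, p in TRANSITIONS[state].items():
        acc += p
        if r < acc:
            return nxt
    return state

def generate(slots=20, state="periodic"):
    msgs = []
    for t in range(slots):
        if state == "periodic":
            msgs.append((t, "CAM", 300))       # (slot, type, bytes)
        elif random.random() < 0.5:            # bursty aperiodic arrivals
            msgs.append((t, "DENM", 1200))
        state = step(state)
    return msgs

print(generate())
```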
TheaterQ is a Linux qdisc designed for dynamic network emulation, addressing the limitations of static parameters in traditional tools like NetEm. By utilizing Trace Files containing timelines of network characteristics, TheaterQ achieves high-accuracy emulation of dynamic networks without involving userspace and supports characteristic updates at resolutions as fine as 1 microsecond. Features include synchronization across multiple qdisc instances and handling of delays, bandwidth, packet loss, duplication, and reordering. Evaluations show TheaterQ's accuracy and its comparable performance to existing tools, offering a flexible solution for modern communication protocol development. TheaterQ is available as open-source software under the GPLv2 license.
The increasing prevalence of LEO satellite mega-constellations for global Internet coverage requires new approaches to evaluate the behavior of existing Internet protocols and applications. Traditional discrete event simulators like Hypatia allow for modeling these environments but fall short in evaluating real applications. This paper builds upon our previous work, in which we proposed a system design for trace-driven emulation of such satellite networks, bridging the gap between simulations and real-time testbeds. By extending the Hypatia framework, we record network path characteristics, e.g., delay and bandwidth, between two endpoints in the network during non-real-time simulations. Path characteristics are exported to Trace Files, which are replayed in real-time emulation environments on real systems, enabling evaluations with real software and human interaction. An advantage of our approach is its easy adaptability to existing simulation models. Our extensive evaluation involves multiple scenarios with different satellite constellations, illustrating the approach's accuracy in reproducing the behavior of satellite networks. Between full simulation, which serves as the baseline for our evaluation, and emulation runs, we observe high correlation metrics of up to 0.96, validating the approach's effectiveness. Challenges such as the lack of emulation-to-simulation feedback and synchronization issues are discussed.
The proliferation of Internet of Things (IoT) networks has created an urgent need for sustainable energy solutions, particularly for battery-constrained, spatially distributed IoT nodes. While low-altitude uncrewed aerial vehicles (UAVs) equipped with wireless power transfer (WPT) capabilities offer a promising solution, the line-of-sight channels that facilitate efficient energy delivery also expose sensitive operational data to adversaries. This paper proposes a novel low-altitude UAV-carried movable-antenna-enhanced transmission system for joint WPT and covert communications, which simultaneously supplies energy to IoT nodes and establishes transmission links with a covert user by leveraging wireless energy signals as a natural cover. Then, we formulate a multi-objective optimization problem that jointly maximizes the total harvested energy of IoT nodes and the sum achievable rate of the covert user, while minimizing the propulsion energy consumption of the low-altitude UAV. To address this non-convex and temporally coupled optimization problem, we propose a mixture-of-experts-augmented soft actor-critic (MoE-SAC) algorithm that employs a sparse Top-K gated mixture-of-shallow-experts architecture to represent the multimodal policy distributions arising from the conflicting optimization objectives. We also incorporate an action projection module that explicitly enforces per-time-slot power budget constraints and antenna position constraints. Simulation results demonstrate that the proposed approach significantly outperforms baseline approaches and other state-of-the-art deep reinforcement learning algorithms.
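The snippet below sketches sparse Top-K gating over shallow experts in plain NumPy; the layer sizes, K, and random weights are assumptions, and the SAC machinery, action projection module, and training loop are omitted.

```python
# Minimal sparse Top-K mixture-of-shallow-experts gating (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_EXPERTS, TOP_K, ACT_DIM = 8, 4, 2, 3

W_gate = rng.normal(size=(STATE_DIM, N_EXPERTS))
experts = [rng.normal(size=(STATE_DIM, ACT_DIM)) for _ in range(N_EXPERTS)]

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def moe_policy_mean(state):
    logits = state @ W_gate
    top = np.argsort(logits)[-TOP_K:]          # keep only the K best experts
    gates = softmax(logits[top])               # renormalize over the kept ones
    return sum(g * (state @ experts[i]) for g, i in zip(gates, top))

print(moe_policy_mean(rng.normal(size=STATE_DIM)))
```

Routing each state through only K experts keeps the per-step cost low while still letting different experts specialize on the conflicting objectives described above.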
Anomaly detection in time-series data is a critical challenge with significant implications for network security. Recent quantum machine learning approaches, such as quantum kernel methods and variational quantum circuits, have shown promise in capturing complex data distributions for anomaly detection but remain constrained by limited qubit counts. In this work, we introduce a novel Quantum Gated Recurrent Unit (QGRU)-based Generative Adversarial Network (GAN) employing Successive Data Injection (SuDaI) and a multi-metric gating strategy for robust network anomaly detection. Our model uniquely utilizes a quantum-enhanced generator that outputs the parameters (mean and log-variance) of a Gaussian distribution via reparameterization, combined with a Wasserstein critic to stabilize adversarial training. Anomalies are identified through a novel gating mechanism that initially flags potential anomalies based on Gaussian uncertainty estimates and subsequently verifies them using a composite of critic scores and reconstruction errors. Evaluated on benchmark datasets, our method achieves a high time-series-aware F1 score (TaF1) of 89.43%, demonstrating superior capability in detecting anomalies accurately and promptly compared to existing classical and quantum models. Furthermore, the trained QGRU-WGAN was deployed on real IBM Quantum hardware, where it retained high anomaly detection performance, confirming its robustness and practical feasibility on current noisy intermediate-scale quantum (NISQ) devices.
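A simplified sketch of the two-stage gating logic is given below; the thresholds, weights, and score scales are assumptions, and the quantum generator and critic are replaced by plain placeholder numbers.

```python
# Toy two-stage gating: flag high-uncertainty timesteps, then confirm them
# with a composite of critic score and reconstruction error.
def flag_candidates(log_vars, var_threshold=1.5):
    """Stage 1: mark timesteps whose predictive log-variance is unusually high."""
    return [i for i, lv in enumerate(log_vars) if lv > var_threshold]

def confirm_anomaly(critic_score, recon_error,
                    w_critic=0.5, w_recon=0.5, threshold=0.7):
    """Stage 2: verify a flagged timestep with a weighted composite score."""
    return w_critic * critic_score + w_recon * recon_error > threshold

log_vars  = [0.2, 0.3, 2.1, 0.4]     # per-timestep log-variance estimates
critic    = [0.1, 0.2, 0.9, 0.3]     # higher = more unusual to the critic
recon_err = [0.05, 0.1, 0.8, 0.1]    # normalized reconstruction errors

anomalies = [i for i in flag_candidates(log_vars)
             if confirm_anomaly(critic[i], recon_err[i])]
print("anomalous timesteps:", anomalies)
```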
The use of Dynamic Random Access Memory (DRAM) for storing Machine Learning (ML) models plays a critical role in accelerating ML inference tasks in the next generation of communication systems. However, the periodic refresh of DRAM results in wasteful energy consumption during standby periods, which is significant for resource-constrained Internet of Things (IoT) devices. To solve this problem, this work advocates two novel approaches: 1) wireless memory activation and 2) wireless memory approximation. These enable wireless devices to efficiently manage the available memory by considering the timing aspects and relevance of ML model usage, hence reducing the overall energy consumption. Numerical results show that our proposed scheme achieves lower energy consumption than the always-on approach while satisfying the retrieval accuracy constraint.
Vehicular fog computing (VFC) has emerged as a promising paradigm that leverages the idle computational resources of nearby fog vehicles (FVs) to complement the computing capabilities of conventional vehicular edge computing. However, utilizing VFC to meet the delay-sensitive and computation-intensive requirements of the FVs poses several challenges. First, the limited resources of road side units (RSUs) struggle to accommodate the growing and diverse demands of vehicles. This limitation is further exacerbated by the information asymmetry between the controller and FVs, which stems from the reluctance of FVs to disclose private information and to share resources voluntarily, and which hinders efficient resource allocation and coordination. Second, the heterogeneity in task requirements and the varying capabilities of RSUs and FVs complicate efficient task offloading, resulting in inefficient resource utilization and potential performance degradation. To address these challenges, we first present a hierarchical VFC architecture that incorporates the computing capabilities of both RSUs and FVs. Then, we formulate a delay minimization optimization problem (DMOP), which is an NP-hard mixed integer nonlinear programming problem. To solve the DMOP, we propose a joint computing resource allocation and task offloading approach (JCRATOA). Specifically, we propose a convex optimization-based method for RSU resource allocation and a contract theory-based incentive mechanism for FV resource allocation. Moreover, we present a two-sided matching method for task offloading by employing the matching game. Simulation results demonstrate that the proposed JCRATOA achieves superior performance in task completion delay, task completion ratio, system throughput, and resource utilization fairness, while effectively satisfying the given constraints.
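As an illustration of the matching-game step, here is a compact deferred-acceptance sketch in which tasks propose to fog vehicles in preference order; the preference lists and unit capacities are toy assumptions rather than the paper's actual JCRATOA formulation.

```python
# Toy deferred-acceptance matching of tasks to fog vehicles (FVs).
def match(task_prefs, fv_prefs, fv_capacity=1):
    free = list(task_prefs)
    nxt = {t: 0 for t in task_prefs}                 # next FV each task proposes to
    assigned = {fv: [] for fv in fv_prefs}
    while free:
        t = free.pop(0)
        if nxt[t] >= len(task_prefs[t]):
            continue                                 # task exhausted its list
        fv = task_prefs[t][nxt[t]]
        nxt[t] += 1
        assigned[fv].append(t)
        if len(assigned[fv]) > fv_capacity:
            # FV keeps its most preferred tasks and rejects the worst one
            assigned[fv].sort(key=fv_prefs[fv].index)
            rejected = assigned[fv].pop()
            free.append(rejected)
    return assigned

task_prefs = {"t1": ["fv1", "fv2"], "t2": ["fv1", "fv2"], "t3": ["fv2", "fv1"]}
fv_prefs   = {"fv1": ["t2", "t1", "t3"], "fv2": ["t1", "t3", "t2"]}
print(match(task_prefs, fv_prefs))
```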
The DNS is a key component of the Internet. Originally designed to facilitate the resolution of host names to IP addresses, its scope has continuously expanded over the years, today covering use cases such as load balancing and service discovery. While DNS was initially conceived as a rather static directory service in which resource records (RRs) change only rarely, a number of use cases have emerged over the years where a DNS flavor that actively distributes updates to all resolvers that have previously shown interest in the respective records, rather than relying purely on requesting and caching RRs, would be preferable. In this paper, we therefore explore a publish-subscribe variant of DNS based on the Media-over-QUIC architecture, devising a strawman system and protocol proposal for pushing RR updates. We provide a prototype implementation and find that DNS can benefit from a publish-subscribe variant: besides limiting update traffic, it can considerably reduce the time it takes for a resolver to receive the latest version of a record, thereby supporting use cases such as load balancing in content distribution networks. The publish-subscribe architecture also brings new challenges to the DNS, including higher overhead for endpoints due to additional state management and increased query latencies on first lookup caused by session establishment.
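The following toy Python sketch shows the kind of resolver-side state a publish-subscribe DNS implies (subscriptions plus pushed-update handling); the data shapes are hypothetical and do not reflect the Media-over-QUIC wire format proposed in the paper.

```python
# Hypothetical resolver-side cache for a publish-subscribe DNS variant.
import time

class PushAwareCache:
    def __init__(self):
        self.records = {}            # (name, rtype) -> (rdata, ttl, updated_at)
        self.subscriptions = set()   # names this resolver has expressed interest in

    def subscribe(self, name, rtype):
        self.subscriptions.add((name, rtype))

    def on_push(self, name, rtype, rdata, ttl):
        """Apply a pushed update, replacing whatever was cached before."""
        if (name, rtype) in self.subscriptions:
            self.records[(name, rtype)] = (rdata, ttl, time.time())

    def resolve(self, name, rtype):
        entry = self.records.get((name, rtype))
        if entry and time.time() - entry[2] < entry[1]:
            return entry[0]
        return None                  # fall back to a classic query on miss

cache = PushAwareCache()
cache.subscribe("cdn.example.org", "A")
cache.on_push("cdn.example.org", "A", "192.0.2.17", ttl=30)
print(cache.resolve("cdn.example.org", "A"))
```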
In 5G mobile communication systems, MU-MIMO has been applied to enhance spectral efficiency and support high data rates. To maximize spectral efficiency while providing fairness among users, the base station (BS) needs to select a subset of users for data transmission. Given that this problem is NP-hard, DRL-based methods have been proposed to infer near-optimal solutions in real time, yet this approach has an intrinsic security problem. This paper investigates how a group of adversarial users can exploit unsanitized raw CSI to launch a throughput degradation attack. Most existing studies have focused only on systems in which adversarial users can obtain the exact values of victims' CSI, but this is impractical in the case of uplink transmission in LTE/5G mobile systems. We note that the DRL policy contains an observation normalizer, which stores the mean and variance of the observations to improve training convergence. Adversarial users can then estimate the upper and lower bounds of the local observations, including the CSI of victims, based solely on that observation normalizer. We develop an attacking scheme, FGGM, by leveraging polytope abstract domains, a technique used to bound the outputs of a neural network given input ranges. Our goal is to find one set of intentionally manipulated CSI values that achieves the attacking goals over the whole range of victims' local observations. Experimental results demonstrate that FGGM can determine a set of adversarial CSI vectors controlled by adversarial users and then reuse those CSI values throughout the simulation to reduce the network throughput of a victim by up to 70% without knowing the exact values of victims' local observations. This study serves as a case study and can be applied to many other DRL-based problems, such as knapsack-oriented resource allocation problems.
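A minimal sketch of the bound-estimation step is shown below: reading the per-dimension mean and variance from the policy's observation normalizer and turning them into plausible observation intervals; the spread factor k is an assumption, and the polytope-based search for the manipulated CSI is not shown.

```python
# Sketch: derive plausible observation bounds from a running normalizer's
# per-dimension mean and variance (illustrative, not the FGGM algorithm).
import numpy as np

def observation_bounds(norm_mean, norm_var, k=3.0):
    """Interval [mean - k*std, mean + k*std] per observation dimension."""
    mean = np.asarray(norm_mean)
    std = np.sqrt(np.asarray(norm_var))
    return mean - k * std, mean + k * std

mean = [0.3, -0.1, 0.8]      # values read from the policy's normalizer (example)
var  = [0.04, 0.09, 0.01]
low, high = observation_bounds(mean, var)
print("lower:", low, "upper:", high)
```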
The proliferation of large-scale distributed systems, such as satellite constellations and high-performance computing clusters, demands robust communication primitives that maintain coordination under unreliable links. The torus topology, with its inherent rotational and reflection symmetries, is a prevalent architecture in these domains. However, conventional routing schemes suffer from substantial packet loss during control-plane synchronization after link failures. This paper introduces a symmetry-driven asynchronous forwarding mechanism that leverages the torus's geometric properties to achieve reliable packet delivery without control-plane coordination. We model packet flow using a topological potential gradient and demonstrate that symmetry-breaking failures naturally induce a reverse flow, which we harness for fault circumvention. We propose two local forwarding strategies, Reverse Flow with Counter-facing Priority (RF-CF) and Lateral-facing Priority (RF-LF), that guarantee reachability to the destination via forward-flow phase transition points, without protocol modifications or additional in-packet overhead. Through percolation analysis and packet-level simulations on a 16 x 16 torus, we show that our mechanism reduces packet loss by up to 17.5% under a 1% link failure rate, with the RF-LF strategy accounting for 28% of successfully delivered packets. This work establishes a foundational link between topological symmetry and communication resilience, providing a lightweight, protocol-agnostic substrate for enhancing distributed systems.
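As a toy illustration of potential-gradient forwarding on a torus, the snippet below ranks the four neighbors of a node by their toroidal distance to the destination and detours around failed links; this priority order only loosely mirrors the RF-CF/RF-LF idea and is an assumption, not the paper's exact strategies.

```python
# Toy potential-gradient next-hop selection on an N x N torus with failed links.
N = 16

def torus_dist(a, b):
    dx = min((a[0] - b[0]) % N, (b[0] - a[0]) % N)
    dy = min((a[1] - b[1]) % N, (b[1] - a[1]) % N)
    return dx + dy                      # "potential" of a node w.r.t. the destination

def next_hop(node, dst, failed_links):
    x, y = node
    neighbors = [((x + 1) % N, y), ((x - 1) % N, y),
                 (x, (y + 1) % N), (x, (y - 1) % N)]
    usable = [n for n in neighbors if (node, n) not in failed_links]
    # prefer the neighbor with the lowest potential; a reverse step is accepted
    # when every forward-facing link has failed
    return min(usable, key=lambda n: torus_dist(n, dst)) if usable else None

print(next_hop((0, 0), (5, 3), failed_links={((0, 0), (1, 0))}))
```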
Path-aware networking, a cornerstone of next-generation architectures like SCION and Multipath QUIC, empowers end-hosts with fine-grained control over traffic forwarding. This capability, however, introduces a critical stability risk: uncoordinated, greedy path selection by a multitude of agents can induce persistent, high-amplitude network oscillations. While this phenomenon is well-known, its quantitative performance impact across key metrics has remained poorly understood. In this paper, we address this gap by developing the first axiomatic framework for analyzing the joint dynamics of path selection and congestion control. Our model enables the formal characterization of the system's dynamic equilibria, i.e., the stable, periodic patterns of oscillation, and provides a suite of axioms to rate their performance in terms of efficiency, loss avoidance, convergence, fairness, and responsiveness. Our analysis reveals a fundamental trade-off in protocol design between predictable performance (efficiency, convergence) and user-centric goals (fairness, responsiveness). We prove, however, that no such trade-off exists among efficiency, convergence, and loss avoidance, which can be simultaneously optimized through careful parameter tuning. Furthermore, we find that agent migration can, counter-intuitively, enhance stability by de-synchronizing traffic, a theoretical result validated by our simulations. These findings provide a principled design map for engineering robust, high-performance protocols for the future path-aware Internet.