Ray-tracing (RT) simulators are essential for wireless digital twins, enabling accurate site-specific radio channel prediction for next-generation wireless systems. Yet, RT simulation accuracy is often limited by insufficient measurement data and a lack of systematic validation. This paper presents site-specific location calibration and validation of NYURay, NYU's in-house ray tracer, at upper mid-band frequencies (6.75 GHz and 16.95 GHz). We propose a location calibration algorithm that corrects GPS-induced position errors by optimizing transmitter-receiver (TX-RX) locations to align simulated and measured power delay profiles, improving TX-RX location accuracy by 42.3% for line-of-sight (LOS) and 13.5% for non-line-of-sight (NLOS) scenarios. Validation across 18 TX-RX locations shows excellent RT accuracy in path loss prediction, with path loss exponent (PLE) deviations under 0.14. While RT underestimates delay spread and angular spreads, their cumulative distributions remain statistically similar. The validated NYURay advances RT validation and provides reliable channel statistics for 6G deployment.
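The calibration idea described above, shifting candidate TX-RX positions until simulated and measured power delay profiles align, can be sketched in miniature. The toy below is not NYURay's algorithm: the geometry, the grid search, and the use of only the first-arrival delay (rather than the full PDP) are invented for illustration.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def first_arrival_delay(tx, rx):
    """LOS first-arrival delay of a simulated PDP (toy stand-in for a ray tracer)."""
    return np.linalg.norm(np.asarray(tx) - np.asarray(rx)) / C

def calibrate_rx(tx, rx_gps, measured_delay, search_radius=10.0, step=0.5):
    """Grid-search a horizontal RX offset minimizing the delay mismatch."""
    best_rx, best_err = np.asarray(rx_gps, float), np.inf
    offsets = np.arange(-search_radius, search_radius + step, step)
    for dx in offsets:
        for dy in offsets:
            cand = np.asarray(rx_gps, float) + np.array([dx, dy, 0.0])
            err = abs(first_arrival_delay(tx, cand) - measured_delay)
            if err < best_err:
                best_rx, best_err = cand, err
    return best_rx, best_err

# True RX at (100, 0, 1.5) m; GPS reports it roughly 6 m off.
tx = np.array([0.0, 0.0, 10.0])
rx_true = np.array([100.0, 0.0, 1.5])
rx_gps = rx_true + np.array([4.0, -4.5, 0.0])
rx_cal, err = calibrate_rx(tx, rx_gps, first_arrival_delay(tx, rx_true))
print(np.linalg.norm(rx_gps - rx_true), np.linalg.norm(rx_cal - rx_true))
```

In this idealized example the grid contains the exact correcting offset, so the calibrated position error drops to zero; real PDP alignment must trade off many multipath components, not just the first arrival.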
This paper proposes a grant-free coded random access (CRA) scheme for uplink massive machine-type communications (mMTC), based on Zak-orthogonal time frequency space (Zak-OTFS) modulation in the delay-Doppler domain. The scheme is tailored for doubly selective wireless channels, where conventional orthogonal frequency-division multiplexing (OFDM)-based CRA suffers from unreliable inter-slot channel prediction due to time-frequency variability. By exploiting the predictable nature of Zak-OTFS, the proposed approach enables accurate channel estimation across slots, facilitating reliable successive interference cancellation across user packet replicas. A fair comparison with an OFDM-based CRA baseline shows that the proposed scheme achieves significantly lower packet loss rates under high mobility and user density. Extensive simulations over the standardized Veh-A channel confirm the robustness and scalability of Zak-OTFS-based CRA, supporting its applicability to future mMTC deployments.
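The replica-based successive interference cancellation that coded random access relies on can be illustrated with a collision-channel toy model. The sketch below is deliberately idealized (singleton slots always decode, cancellation is perfect, and all parameters are invented); reliable inter-slot channel prediction, which the abstract credits to Zak-OTFS, is roughly what justifies the perfect-cancellation assumption.

```python
import numpy as np

def simulate_cra(n_users=30, n_slots=100, replicas=2, rounds=50, seed=0):
    """Toy coded random access with ideal SIC: each user sends `replicas`
    copies of its packet in random slots; the receiver decodes any
    collision-free (singleton) slot and cancels that user's other replicas."""
    rng = np.random.default_rng(seed)
    slot_choices = [rng.choice(n_slots, size=replicas, replace=False)
                    for _ in range(n_users)]
    occupancy = np.zeros(n_slots, int)
    for slots in slot_choices:
        occupancy[slots] += 1
    decoded = np.zeros(n_users, bool)
    for _ in range(rounds):
        progress = False
        for u in range(n_users):
            if decoded[u]:
                continue
            if any(occupancy[s] == 1 for s in slot_choices[u]):
                decoded[u] = True                  # singleton slot -> decodable
                occupancy[slot_choices[u]] -= 1   # SIC: remove all replicas
                progress = True
        if not progress:
            break
    return 1.0 - decoded.mean()  # packet loss rate

print(simulate_cra())
```

At this moderate load (30 users, 100 slots), iterative peeling resolves almost all packets; the paper's point is that such cancellation breaks down under OFDM when the inter-slot channel cannot be predicted reliably.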
This paper explores near-field (NF) covert communication aided by rate-splitting multiple access (RSMA) and reconfigurable intelligent surfaces (RIS). In particular, the RIS operates in the NF of both the legitimate user and the passive adversary, enhancing the legitimate user's received signal while suppressing the adversary's detection capability. Meanwhile, the base station (BS) applies RSMA to increase the covert communication rate, which is composed of a private and a shared rate component. To characterize system covertness, we derive closed-form expressions for the detection error probability (DEP), the outage probability (OP), and the adversary's optimal detection threshold. We formulate a non-convex joint beamforming optimization problem at the BS and RIS under unit-modulus constraints to maximize the covert rate. To tackle this, we propose an alternating optimization (AO) algorithm in which the BS beamformer is designed using a two-stage iterative method based on successive convex approximation (SCA). Additionally, two low-complexity techniques are introduced to further reduce the adversary's received power. Simulation results demonstrate that the proposed algorithm effectively improves the covert communication rate, highlighting the potential of near-field RSMA-RIS integration in covert communication.
Pain is a complex condition affecting a large portion of the population. Accurate and consistent evaluation is essential for individuals experiencing pain, and it supports the development of effective and advanced management strategies. Automatic pain assessment systems provide continuous monitoring and support clinical decision-making, aiming to reduce distress and prevent functional decline. This study has been submitted to the Second Multimodal Sensing Grand Challenge for Next-Gen Pain Assessment (AI4PAIN). The proposed method introduces a pipeline that leverages respiration as the input signal and incorporates a highly efficient cross-attention transformer alongside a multi-windowing strategy. Extensive experiments demonstrate that respiration is a valuable physiological modality for pain assessment. Moreover, experiments revealed that compact and efficient models, when properly optimized, can achieve strong performance, often surpassing larger counterparts. The proposed multi-window approach effectively captures both short-term and long-term features, as well as global characteristics, thereby enhancing the model's representational capacity.
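The multi-windowing idea, short windows for transient detail, long windows for slower trends, plus a global summary, can be sketched for a 1-D physiological signal. The window lengths, sampling rate, and features below are illustrative, not the paper's configuration.

```python
import numpy as np

def window_features(signal, win):
    """Mean and std over non-overlapping windows of length `win`."""
    n = len(signal) // win
    chunks = signal[: n * win].reshape(n, win)
    return np.stack([chunks.mean(axis=1), chunks.std(axis=1)], axis=1)

fs = 32                                    # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)               # one minute of data
resp = np.sin(2 * np.pi * 0.25 * t)        # ~15 breaths/min toy respiration signal

short_f = window_features(resp, 2 * fs)    # 2-second windows: fast detail
long_f = window_features(resp, 10 * fs)    # 10-second windows: slow trends
global_f = np.array([resp.mean(), resp.std()])  # whole-recording summary
print(short_f.shape, long_f.shape, global_f.shape)
```

In the paper's pipeline these multi-scale views would feed a cross-attention transformer; here they simply show how one signal yields features at several temporal resolutions.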
In this paper, we investigate a bistatic integrated sensing and communications (ISAC) system, consisting of a multi-antenna base station (BS), a multi-antenna sensing receiver, a single-antenna communication user (CU), and a point target to be sensed. Specifically, the BS transmits a superposition of Gaussian information and deterministic sensing signals. The BS aims to deliver information symbols to the CU, while the sensing receiver aims to estimate the target's direction-of-arrival (DoA) with respect to the sensing receiver by processing the echo signals. For the sensing receiver, we assume that only the sequences of the deterministic sensing signals and the covariance matrix of the information signals are perfectly known, whereas the specific realizations of the information signals remain unavailable. Under this setup, we first derive the corresponding Cramér-Rao bounds (CRBs) for DoA estimation and propose practical estimators to accurately estimate the target's DoA. Subsequently, we formulate the transmit beamforming design as an optimization problem aiming to minimize the CRB, subject to a minimum signal-to-interference-plus-noise ratio (SINR) requirement at the CU and a maximum transmit power constraint at the BS. When the BS employs only Gaussian information signals, the resulting beamforming optimization problem is convex, enabling the derivation of an optimal solution. In contrast, when both Gaussian information and deterministic sensing signals are transmitted, the resulting problem is non-convex and a locally optimal solution is acquired by exploiting successive convex approximation (SCA). Finally, numerical results demonstrate that employing Gaussian information signals leads to a notable performance degradation for target sensing and the proposed transmit beamforming design achieves a superior ISAC performance boundary compared with various benchmark schemes.
The optimization of the pilot-to-data power ratio (PDPR) helps wireless systems acquire channel state information while minimizing the pilot overhead. While the optimization of the PDPR in cellular networks has been studied extensively, its effect in networks assisted by reconfigurable intelligent surfaces (RIS) has hardly been examined. This paper tackles this optimization when the communication is assisted by a RIS whose phase shifts are adjusted on the basis of the statistics of the channels. For a setting representative of a macrocellular deployment, the benefits of optimizing the PDPR are seen to be significant over a broad range of operating conditions. These benefits, demonstrated through the ergodic minimum mean squared error, for which a closed-form solution is derived, become more pronounced as the number of RIS elements and/or the channel coherence grow large.
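Why the pilot-data power split has an interior optimum can be seen in a toy single-antenna model (this is not the paper's RIS setting; the block length, SNR, and the standard effective-SNR expression with MMSE channel estimation are used purely for illustration): too little pilot energy ruins the channel estimate, too much starves the data symbols.

```python
import numpy as np

def effective_snr(alpha, snr=10.0, T=100, n_pilots=1):
    """Post-estimation effective SNR when a fraction `alpha` of the
    per-block energy budget goes to pilots (toy MMSE-estimation model)."""
    e_total = snr * T                                # total energy per block
    e_p = alpha * e_total                            # pilot energy
    rho_d = (1 - alpha) * e_total / (T - n_pilots)   # per-symbol data SNR
    return rho_d * e_p / (1.0 + rho_d + e_p)

alphas = np.linspace(0.01, 0.99, 99)
best = alphas[np.argmax([effective_snr(a) for a in alphas])]
print(best)  # the optimum lies strictly inside (0, 1)
```

The optimum sits at a small but nonzero pilot fraction here; in the RIS-assisted setting of the paper, the optimal PDPR additionally depends on the number of RIS elements and the channel coherence.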
This paper considers wireless communication assisted by a reconfigurable intelligent surface (RIS), focusing on the two-timescale approach, in which the RIS phase shifts are optimized based on channel statistics to mitigate the overheads associated with channel estimation. It is shown that, while the power captured by the RIS scales linearly with the number of its elements, the two-timescale beamforming gain upon re-radiation towards the receiver saturates rapidly as the number of RIS elements increases, for a broad class of power angular spectra (PAS). The ultimate achievable gain is determined by the decay rate of the PAS in the angular domain, which directly influences how rapidly spatial correlations between RIS elements diminish. The implications of this saturation on the effectiveness of RIS-assisted communications are discussed.
Affine frequency division multiplexing (AFDM) is an emerging waveform candidate for future sixth generation (6G) systems offering a range of promising features, such as enhanced robustness in heterogeneous and high-mobility environments, as well as inherent suitability for integrated sensing and communications (ISAC) applications. In addition, unlike other candidates such as orthogonal time-frequency space (OTFS) modulation, AFDM provides several unique advantages that strengthen its relevance to practical deployment and standardization in 6G. Notably, as a natural generalization of orthogonal frequency division multiplexing (OFDM), strong backward compatibility with existing conventional systems is guaranteed, while also offering novel possibilities in waveform design, for example to enable physical-layer security through its inherent chirp parametrization. In all, this article provides an overview of AFDM, emphasizing its suitability as a candidate waveform for 6G standardization. First, we provide a concise introduction to the fundamental properties and unique characteristics of AFDM, followed by highlights of its advantageous features, and finally a discussion of its potential and challenges in 6G standardization efforts and representative requirements.
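The backward-compatibility claim can be made concrete with the discrete affine Fourier transform (DAFT) at the core of AFDM. One common convention writes it as A = Λ_{c2} F Λ_{c1}, where F is the unitary DFT and Λ_c is a diagonal chirp; with c1 = c2 = 0 the chirps vanish and the transform reduces to the plain DFT, i.e., OFDM. The chirp parameter below is illustrative.

```python
import numpy as np

def daft_matrix(N, c1, c2):
    """Unitary DAFT matrix A = Lambda_{c2} F Lambda_{c1} (one common convention)."""
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT
    L1 = np.diag(np.exp(-2j * np.pi * c1 * n**2))              # input chirp
    L2 = np.diag(np.exp(-2j * np.pi * c2 * n**2))              # output chirp
    return L2 @ F @ L1

N = 16
A = daft_matrix(N, c1=1.0 / (2 * N), c2=0.0)       # illustrative chirp rate
n = np.arange(N)
F0 = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
print(np.allclose(A @ A.conj().T, np.eye(N)))      # unitary for any c1, c2
print(np.allclose(daft_matrix(N, 0, 0), F0))       # c1 = c2 = 0 recovers the DFT
```

Unitarity holds for any chirp rates (a product of unitary factors), which is also why the chirp parametrization can be repurposed, e.g., as a shared secret for physical-layer security.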
Federated Learning (FL) enables distributed model training on edge devices while preserving data privacy. However, FL deployments in wireless networks face significant challenges, including communication overhead, unreliable connectivity, and high energy consumption, particularly in dynamic environments. This paper proposes EcoFL, an integrated FL framework that leverages the Open Radio Access Network (ORAN) architecture with multiple Radio Access Technologies (RATs) to enhance communication efficiency and ensure robust FL operations. EcoFL implements a two-stage optimisation approach: an RL-based rApp for dynamic RAT selection that balances energy efficiency with network performance, and a CNN-based xApp for near real-time resource allocation with adaptive policies. This coordinated approach significantly enhances communication resilience under fluctuating network conditions. Experimental results demonstrate competitive FL model performance with 19% lower power consumption compared to baseline approaches, highlighting substantial potential for scalable, energy-efficient collaborative learning applications.
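The energy-performance trade-off behind RL-based RAT selection can be sketched with a simple epsilon-greedy bandit (a stand-in for the rApp described above; the RAT statistics, reward weighting, and hyperparameters are all invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
# (mean throughput in Mbps, energy cost in W) per RAT -- invented numbers
rats = {"sub6": (50.0, 2.0), "mmwave": (200.0, 8.0), "wifi": (80.0, 1.0)}
names = list(rats)

def reward(rat, w_energy=0.5):
    """Reward trades off normalized throughput against normalized energy."""
    mean_tput, energy = rats[rat]
    tput = rng.normal(mean_tput, 10.0)      # noisy observed throughput
    return tput / 200.0 - w_energy * energy / 8.0

q = {r: 0.0 for r in names}                 # running value estimates
counts = {r: 0 for r in names}
for t in range(2000):
    r = (rng.choice(names) if rng.random() < 0.1    # explore 10% of the time
         else max(names, key=q.get))                # otherwise exploit
    counts[r] += 1
    q[r] += (reward(r) - q[r]) / counts[r]  # incremental mean update

print(max(q, key=q.get))
```

With this particular weighting the high-throughput RAT wins despite its energy cost; shifting `w_energy` upward flips the preference toward the low-power option, which is exactly the knob such an rApp would expose.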
The deployment of AI agents within legacy Radio Access Network (RAN) infrastructure poses significant safety and reliability challenges for future 6G networks. This paper presents a novel Edge AI framework for autonomous network optimisation in Open RAN environments, addressing these challenges through three core innovations: (1) a persona-based multi-tools architecture enabling distributed, context-aware decision-making; (2) a proactive anomaly detection agent powered by a traffic prediction tool; and (3) a safety-aligned reward mechanism that balances performance with operational stability. Integrated into the RAN Intelligent Controller (RIC), our framework leverages multimodal data fusion, including network KPIs, a traffic prediction model, and external information sources, to anticipate and respond to dynamic network conditions. Extensive evaluation using realistic 5G scenarios demonstrates that the edge framework achieves zero network outages under high-stress conditions, compared to 8.4% for traditional fixed-power networks and 3.3% for large language model (LLM) agent-based approaches, while maintaining near real-time responsiveness and consistent QoS. These results establish that, when equipped with the right tools and contextual awareness, AI agents can be safely and effectively deployed in critical network infrastructure, laying the groundwork for intelligent and autonomous 5G and beyond network operations.
Cell-free massive multiple-input multiple-output (MIMO) implemented in virtualized cloud radio access networks (V-CRAN) has emerged as a promising architecture to enhance spectral efficiency (SE), network flexibility, and energy efficiency (EE) in next-generation wireless systems. In this work, we develop a holistic optimization framework for the efficient deployment of cell-free massive MIMO in V-CRAN with multiple mobile network operators (MNOs). Specifically, we formulate a set of mixed-integer programming (MIP) models to jointly optimize access point (AP) selection, user equipment (UE) association, cloud resource allocation, and MNO assignment while minimizing the maximum total power consumption (TPC) across MNOs. We consider two different scenarios based on whether UEs can be assigned to arbitrary MNOs or not. The numerical results demonstrate the impact of different deployment assumptions on power consumption, highlighting that flexible UE-MNO assignment significantly reduces TPC. The findings provide key insights into optimizing resource management in cell-free massive MIMO V-CRAN, paving the way for energy-efficient wireless network implementations.
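The min-max objective above, minimizing the maximum total power consumption across MNOs, can be shown on a toy instance. The paper uses mixed-integer programming; the sketch below simply brute-forces UE-to-MNO assignments on an invented three-UE, two-MNO example to make the objective concrete.

```python
import itertools

# power[(ue, mno)]: TPC contribution (W) of serving UE via MNO -- invented data
power = {
    (0, 0): 3.0, (0, 1): 5.0,
    (1, 0): 4.0, (1, 1): 2.0,
    (2, 0): 6.0, (2, 1): 3.0,
}
n_ues, n_mnos = 3, 2

def max_tpc(assignment):
    """Maximum per-MNO total power for a given UE->MNO assignment tuple."""
    totals = [0.0] * n_mnos
    for ue, mno in enumerate(assignment):
        totals[mno] += power[(ue, mno)]
    return max(totals)

# Enumerate all n_mnos^n_ues assignments and pick the min-max one.
best = min(itertools.product(range(n_mnos), repeat=n_ues), key=max_tpc)
print(best, max_tpc(best))
```

Here the flexible assignment (0, 1, 1) balances load across the two operators; fixing UEs to their "home" MNO would generally raise the max-TPC, which is the paper's point about flexible UE-MNO assignment.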
Cell-free massive MIMO (multiple-input multiple-output) is a key enabler for the sixth generation (6G) of mobile networks, offering significant spectral and energy efficiency gains through user-centric operation of distributed access points (APs). However, its reliance on low-cost APs introduces inevitable hardware impairments, whose combined impact on wideband downlink systems remains unexplored when analyzed using behavioral models. This paper presents a comprehensive analysis of the downlink spectral efficiency (SE) in cell-free massive MIMO-OFDM systems under practical hardware impairments, including phase noise and third-order power amplifier nonlinearities. Both centralized and distributed precoding strategies are examined. By leveraging the Bussgang decomposition, we derive an SE expression and quantify the relative impact of impairments through simulations. Our results reveal that phase noise causes more severe degradation than power amplifier distortions, especially in distributed operation, highlighting the need for future distortion-aware precoding designs.
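The Bussgang decomposition invoked above splits a nonlinearity's output into a scaled replica of the input plus distortion uncorrelated with it. The sketch below uses a memoryless third-order PA model y = x + a3·x³ with a real (rather than complex) Gaussian input for simplicity; the coefficient a3 is invented. For real Gaussian x with variance s², E[x⁴] = 3s⁴ gives the closed-form Bussgang gain B = 1 + 3·a3·s², which we check by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
a3, s = -0.05, 1.0                      # illustrative compression coefficient
x = rng.normal(0.0, s, 1_000_000)       # Gaussian PA input
y = x + a3 * x**3                       # third-order memoryless PA model

B_mc = np.mean(x * y) / np.mean(x**2)   # Bussgang gain, sample estimate
B_cf = 1.0 + 3.0 * a3 * s**2            # closed form for real Gaussian input
d = y - B_mc * x                        # distortion term
print(B_mc, B_cf, np.mean(x * d))       # gains agree; d uncorrelated with x
```

Treating `d` as additional (colored, in the frequency-selective case) noise is what makes the SE expressions in such analyses tractable.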
Cell-free massive MIMO is a key 6G technology, offering superior spectral and energy efficiency. However, its dense deployment of low-cost access points (APs) makes hardware impairments unavoidable. While narrowband impairments are well-studied, their impact in wideband systems remains unexplored. This paper provides the first comprehensive analysis of hardware impairments, such as nonlinear distortion in low-noise amplifiers, phase noise, in-phase-quadrature imbalance, and low-resolution analog-to-digital converters, on uplink spectral efficiency in cell-free massive MIMO. Using an OFDM waveform and centralized processing, APs share channel state information for joint uplink combining. Leveraging Bussgang decomposition, we derive a distortion-aware combining vector that optimizes spectral efficiency by modeling distortion as independent colored noise.
Massive multiple-input multiple-output (MIMO) systems with orthogonal frequency division multiplexing (OFDM) are foundational for downlink multi-user (MU) communication in future wireless networks, owing to their ability to enhance spectral efficiency and support a large number of users simultaneously. However, high user density intensifies severe inter-user interference (IUI) and pilot overhead. Consequently, existing blind and semi-blind channel estimation (CE) and signal detection (SD) algorithms suffer performance degradation and increased complexity, especially when further challenged by frequency-selective channels and high-order modulation demands. To this end, this paper proposes a novel semi-blind joint channel estimation and signal detection (JCESD) method. Specifically, the proposed approach employs a hybrid precoding architecture to suppress IUI. Furthermore, we formulate JCESD as a non-convex constellation fitting optimization exploiting constellation affine invariance. Few pilots are used to achieve coarse estimation for initialization and ambiguity resolution. For high-order modulations, a data augmentation mechanism utilizes the symmetry of quadrature amplitude modulation (QAM) constellations to increase the effective number of samples. To address frequency-selective channels, CE accuracy is then enhanced via an iterative refinement strategy that leverages improved SD results. Simulation results demonstrate an average throughput gain of 11% over widely used pilot-based methods in MU scenarios, highlighting the proposed method's potential to improve spectral efficiency.
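The QAM-symmetry augmentation idea rests on a simple fact: a square QAM constellation is invariant under rotations by multiples of 90°, so rotated copies of received samples are equally valid samples for constellation fitting. The sketch below (16-QAM, details of the paper's pipeline omitted) verifies the invariance and shows the 4x sample multiplication.

```python
import numpy as np

def qam16():
    """Unnormalized 16-QAM constellation {-3,-1,1,3} x {-3,-1,1,3}j."""
    pts = np.array([-3, -1, 1, 3], float)
    return (pts[:, None] + 1j * pts[None, :]).ravel()

def augment(samples):
    """Return samples rotated by 1j**k for k = 0..3 (4x more samples)."""
    return np.concatenate([samples * 1j**k for k in range(4)])

const = qam16()
rotated = np.sort_complex(const * 1j)
print(np.allclose(np.sort_complex(const), rotated))  # constellation is 90-degree invariant
print(augment(const).size)                           # 4 * 16 = 64 samples
```

The same trick leaves a four-fold phase ambiguity, which is why the method still needs a few pilots for ambiguity resolution, as noted above.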
This paper proposes a novel framework for causal discovery with asymmetric error control, called Neyman-Pearson causal discovery. Although in many applications different types of edge errors carry different costs, current state-of-the-art causal discovery algorithms neither differentiate between the types of edge errors nor provide any finite-sample guarantees on them. Hence, this framework seeks to minimize one type of error while keeping the other below a user-specified tolerance level. Using techniques from information theory, fundamental performance limits are found for Neyman-Pearson causal discovery, characterized by the Rényi divergence. Furthermore, a causal discovery algorithm called epsilon-CUT is introduced for the case of linear additive Gaussian noise models; it provides finite-sample guarantees on the false positive rate while staying competitive with state-of-the-art methods.
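The Neyman-Pearson flavor of the guarantee, keep one error type below a user tolerance, optimize the other, can be illustrated with a toy edge test (this is not the epsilon-CUT algorithm; the statistic, sample sizes, and tolerance are invented). We declare an edge when the absolute sample correlation exceeds a threshold calibrated on null data so the false positive rate stays near a chosen alpha.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, trials = 200, 0.05, 2000

def abs_corr(x, y):
    return abs(np.corrcoef(x, y)[0, 1])

# Calibrate the threshold as the (1 - alpha) quantile of the null statistic
# (independent pairs, i.e., no edge).
null_stats = [abs_corr(rng.normal(size=n), rng.normal(size=n))
              for _ in range(trials)]
thr = np.quantile(null_stats, 1 - alpha)

# Empirical false positive rate on fresh null data stays near alpha.
fpr = np.mean([abs_corr(rng.normal(size=n), rng.normal(size=n)) > thr
               for _ in range(trials)])
print(round(fpr, 3))
```

The paper's contribution is the finite-sample version of this control for edge decisions in a full discovery algorithm, plus the information-theoretic limits on how small the other error can then be.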
To address limitations of graph fractional Fourier transform (GFRFT) Wiener filtering and traditional joint time-vertex fractional Fourier transform (JFRFT) Wiener filtering, this study proposes a filtering method based on the hyper-differential form of the JFRFT. A gradient backpropagation mechanism is employed to enable adaptive selection of the transform-order pair and the filter coefficients. First, leveraging the hyper-differential forms of the GFRFT and the fractional Fourier transform, the hyper-differential form of the JFRFT is constructed and its properties are analyzed. Second, time-varying graph signals are divided into dynamic graph sequences of equal span along the temporal dimension. A spatiotemporal joint representation is then established through vectorized reorganization, followed by joint time-vertex Wiener filtering. Furthermore, by rigorously proving the differentiability of the transform orders, both the transform orders and the filter coefficients are embedded as learnable parameters within a neural network architecture. Through gradient backpropagation, their synchronized iterative optimization is achieved, yielding a parameter-adaptive learning filtering framework. This method leverages a model-driven approach to learn the optimal transform-order pair and filter coefficients. Experimental results indicate that the proposed framework improves denoising performance for time-varying graph signals, while reducing the computational burden of the traditional grid search strategy.
The one-dimensional (1D) fractional Fourier transform (FRFT) generalizes the 1D Fourier transform, offering significant advantages in time-frequency analysis of non-stationary signals. To extend the benefits of the 1D FRFT to higher-dimensional signals, 2D FRFTs, such as the 2D separable FRFT (SFRFT), gyrator transform (GT), and coupled FRFT (CFRFT), have been developed. However, existing 2D FRFTs suffer from several limitations: (1) a lack of theoretical uniformity and general applicability, (2) an inability to handle 2D non-stationary signals with nonseparable terms, and (3) failure to maintain a consistent 4D rotational relationship with the 2D Wigner distribution (WD), which is essential for ensuring geometric consistency and symmetry in time-frequency analysis. These limitations restrict the methods' performance in practical applications, such as radar, communication, sonar, and optical imaging, in which nonseparable terms frequently arise. To address these challenges, we introduce a more general definition of the 2D FRFT, termed the 2D nonseparable FRFT (NSFRFT). The 2D NSFRFT has four degrees of freedom, includes the 2D SFRFT, GT, and CFRFT as special cases, and maintains a more general 4D rotational relationship with the 2D WD. We derive its properties and present three discrete algorithms, two of which are fast algorithms with computational complexity $O(N^2 \log N)$ comparable to that of the 2D SFRFT. Numerical simulations and experiments demonstrate the superior performance of the 2D NSFRFT in applications such as image encryption, decryption, filtering, and denoising.
The rapid advancement of large foundation models is propelling paradigm shifts across various industries. One significant change is that agents, instead of traditional machines or humans, will be the primary participants in the future production process, which consequently requires a novel AI-native communication system tailored for agent communications. Integrating the abilities of large language models (LLMs) with task-oriented semantic communication is a potential approach. However, the output of existing LLMs is human language, which is highly constrained and sub-optimal for agent-type communication. In this paper, we innovatively propose a task-oriented agent communication system. Specifically, we leverage the original LLM to learn a specialized machine language represented by token embeddings. Simultaneously, a multi-modal LLM is trained to comprehend the application task and to extract essential implicit information from multi-modal inputs, subsequently expressing it using machine language tokens. This representation is significantly more efficient for transmission over the air interface. Furthermore, to reduce transmission overhead, we introduce a joint token and channel coding (JTCC) scheme that compresses the token sequence by exploiting its sparsity while enhancing robustness against channel noise. Extensive experiments demonstrate that our approach reduces transmission overhead for downstream tasks while enhancing accuracy relative to state-of-the-art methods.