Deep learning (DL) methods are widely used to extract high-dimensional patterns from the sequence features of radar echo signals. However, conventional DL algorithms face challenges such as redundant feature segments and the constraints imposed by restricted model sizes. To address these issues, we propose a framework that integrates feature preprocessing with large language models (LLMs). Our preprocessing module tokenizes radar sequence features, applies a patch selection algorithm to filter out uninformative segments, and projects the selected patches into embeddings compatible with the feature space of pre-trained LLMs. Leveraging these refined embeddings, we incorporate a pre-trained LLM, fine-tuning only the normalization layers to reduce the training burden while enhancing performance. Experiments on measured datasets demonstrate that the proposed method significantly outperforms state-of-the-art baselines in supervised learning tests.
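A minimal PyTorch sketch of this preprocessing idea, under stated assumptions: per-patch variance stands in for the paper's (unspecified) patch selection criterion, the projection is a single linear layer, and `RadarPatchPreprocessor` with all its sizes is a hypothetical illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class RadarPatchPreprocessor(nn.Module):
    """Tokenize a radar feature sequence into patches, keep the most
    informative ones, and project them into an LLM embedding space."""

    def __init__(self, patch_len=16, keep_ratio=0.5, llm_dim=768):
        super().__init__()
        self.patch_len = patch_len
        self.keep_ratio = keep_ratio
        self.proj = nn.Linear(patch_len, llm_dim)   # patch -> LLM embedding

    def forward(self, x):                           # x: (batch, seq_len)
        b, n = x.shape
        x = x[:, : n - n % self.patch_len]          # drop the ragged tail
        patches = x.reshape(b, -1, self.patch_len)
        # Assumption: per-patch variance as an informativeness proxy.
        score = patches.var(dim=-1)
        k = max(1, int(self.keep_ratio * patches.shape[1]))
        idx = score.topk(k, dim=1).indices.sort(dim=1).values
        kept = torch.gather(
            patches, 1, idx.unsqueeze(-1).expand(-1, -1, self.patch_len))
        return self.proj(kept)                      # (batch, k, llm_dim)

def freeze_all_but_norm(llm: nn.Module):
    """Fine-tune only normalization layers, as the abstract describes
    (a name-matching heuristic that catches typical LayerNorm params)."""
    for name, p in llm.named_parameters():
        p.requires_grad = ("norm" in name.lower())
```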
Recent advances in pre-trained large language models (LLMs) have demonstrated their capacity to capture universal knowledge, making them promising general-purpose optimization solvers for wireless signal processing. Motivated by these findings, we take the first step towards fine-tuning pre-trained LLMs for the effective analysis of radar signal features in marine target detection tasks. Nevertheless, directly fine-tuning pre-trained LLMs on marine target detection tasks tends to suffer from pronounced overfitting, particularly in challenging low signal-to-clutter ratio (SCR) scenarios. This overfitting primarily stems from the model's tendency to memorize spurious or noisy feature patterns rather than learning discriminative structures that generalize well to unseen data. To address this challenge, we introduce RadarLLM, a novel fine-tuning framework built on an effective preference-aware loss. Unlike conventional training strategies that uniformly optimize all feature tokens, this loss function selectively optimizes different feature patches based on their online-evaluated learning values, thus guiding the model to focus on the most generalizable patterns during optimization. We theoretically demonstrate the effectiveness of the evaluated learning values by recasting the problem as one of selecting useful feature tokens. Extensive experiments on real-world marine radar datasets show that 1) the proposed loss function substantially outperforms conventional uniform training, with particularly significant gains in challenging low SCR scenarios, and 2) RadarLLM consistently outperforms state-of-the-art baselines across diverse detection scenarios, with particularly notable gains under limited training data conditions.
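To make the token-selective idea concrete, here is a hedged sketch: the paper's online learning-value evaluation is replaced by a simple proxy (lower instantaneous token loss receives higher weight), and `preference_aware_loss` and `tau` are illustrative, not the authors' definitions.

```python
import torch
import torch.nn.functional as F

def preference_aware_loss(logits, targets, tau=1.0):
    """Token-selective loss in the spirit of RadarLLM: instead of
    averaging uniformly over feature tokens, weight each token by an
    online 'learning value'. Assumption: tokens with the lowest
    instantaneous loss are treated as the most generalizable and get
    the largest weights -- a stand-in for the paper's criterion.

    logits: (batch, tokens, classes); targets: (batch, tokens)."""
    per_token = F.cross_entropy(
        logits.flatten(0, 1), targets.flatten(0, 1), reduction="none"
    ).view_as(targets)
    # Online-evaluated weights: lower loss -> higher preference.
    w = torch.softmax(-per_token.detach() / tau, dim=-1)
    return (w * per_token).sum(dim=-1).mean()
```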
Branched broomrape (Phelipanche ramosa) is a chlorophyll-deficient parasitic weed that threatens tomato production by extracting nutrients from the host. We investigate early detection using leaf-level spectral reflectance (400-2500 nm) and ensemble machine learning. In a field experiment in Woodland, California, we tracked 300 tomato plants across growth stages defined by growing degree days (GDD). Leaf reflectance was acquired with a portable spectrometer and preprocessed (band denoising, 1 nm interpolation, Savitzky-Golay smoothing, correlation-based band reduction). Clear class differences were observed near 1500 nm and 2000 nm water absorption features, consistent with reduced leaf water content in infected plants at early stages. An ensemble combining Random Forest, XGBoost, SVM with RBF kernel, and Naive Bayes achieved 89% accuracy at 585 GDD, with recalls of 0.86 (infected) and 0.93 (noninfected). Accuracy declined at later stages (e.g., 69% at 1568 GDD), likely due to senescence and weed interference. Despite the small number of infected plants and environmental confounders, results show that proximal sensing with ensemble learning enables timely detection of broomrape before canopy symptoms are visible, supporting targeted interventions and reduced yield losses.
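A minimal scikit-learn sketch of the four-model ensemble the abstract names, assuming soft (probability-averaged) voting; all hyperparameters are illustrative rather than the paper's, and `xgboost` is assumed installed alongside scikit-learn.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Four-model ensemble over preprocessed leaf reflectance spectra.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300)),
        ("xgb", XGBClassifier(n_estimators=300, eval_metric="logloss")),
        ("svm", make_pipeline(StandardScaler(),
                              SVC(kernel="rbf", probability=True))),
        ("nb", GaussianNB()),
    ],
    voting="soft",   # average class probabilities across the four models
)
# X: (n_plants, n_bands) denoised, smoothed reflectance; y: infected / not.
# ensemble.fit(X_train, y_train); ensemble.score(X_test, y_test)
```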
Fluid antenna systems (FAS) have recently emerged as a promising solution for sixth-generation (6G) ultra-dense connectivity. These systems utilize dynamic radiating and/or shaping techniques to mitigate interference and improve spectral efficiency without relying on channel state information (CSI). The reported improvements achieved by employing a single dynamically activated radiating position in fluid antenna multiple access (FAMA) are significant. To fully realize the potential of FAMA in multi-user multiplexing, we propose leveraging the unique fast-switching capabilities of a single radio-frequency (RF)-chain meta-fluid antenna structure to achieve multi-activation. This allows for a significantly larger set of independent radiating states without requiring additional signal processing. Simulations demonstrate that multi-activation FAMA enables robust multi-user multiplexing with a higher signal-to-interference ratio (SIR) under various Rayleigh-fading environments compared to other single RF-chain technologies. We further show that the SIR can be optimized within a 15 μs timeframe under a multi-user Rayleigh-fading channel, making the proposed scheme highly suitable for fast-changing wireless environments. Verified through the theoretical Jakes' model, full three-dimensional (3D) electromagnetic (EM) simulations and experimental validation, multi-activation FAMA enables effective CSI-free, multi-user communication, offering a scalable solution for high-capacity wireless networks.
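A Monte-Carlo sketch of the underlying FAMA principle, under a strong simplifying assumption: the candidate radiating states are modeled as i.i.d. Rayleigh-faded ports (an idealization of the larger independent state set that multi-activation provides), and the activated state is simply the one maximizing the desired user's SIR.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_state_sir(n_states=64, n_users=4, n_trials=10_000):
    """Among n_states independent Rayleigh-faded radiating states,
    activate the one maximizing the signal-to-interference ratio (SIR)
    for user 0; return the mean best-state SIR in dB."""
    sirs = np.empty(n_trials)
    for t in range(n_trials):
        # i.i.d. complex Gaussian gains: column 0 desired, rest interferers.
        h = (rng.standard_normal((n_states, n_users))
             + 1j * rng.standard_normal((n_states, n_users))) / np.sqrt(2)
        sig = np.abs(h[:, 0]) ** 2
        interf = (np.abs(h[:, 1:]) ** 2).sum(axis=1)
        sirs[t] = (sig / interf).max()     # fast switching picks the best
    return 10 * np.log10(sirs.mean())

print(f"Mean best-state SIR: {best_state_sir():.1f} dB")
```

Growing `n_states` in this toy model illustrates why a larger set of independent radiating states improves CSI-free interference mitigation.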
The selection of nodes that can serve as cluster heads, local sinks and gateways is a critical challenge in distributed sensor and communication networks. This paper presents a novel framework for identifying a minimal set of nexus nodes to ensure full network coverage while minimizing cost. By formulating the problem as a convex relaxation of the NP-hard set cover problem, we integrate the graph theoretic centrality measures of node degree and betweenness centrality into a cost function optimized via a relaxed L1-norm minimization. The proposed approach is applicable to static and dynamic network scenarios and does not require location or distance estimation. Through simulations across various graph models and dynamic conditions, it is shown that the method achieves faster execution times (lower complexity) and competitive sparsity compared to classical greedy and genetic algorithms (GA), offering a robust, distributed, and cost-efficient node selection solution.
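A hedged sketch of the formulation described above, using `networkx` centralities and `scipy`'s LP solver: each node must be covered by itself or a neighbor, the cost favors high-degree/high-betweenness nodes, and the relaxed L1 objective (sum of the indicator variables) promotes sparsity. The weighting `alpha`, the rounding threshold, and the function name are illustrative choices, not the paper's.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def select_nexus_nodes(G, alpha=0.5, thresh=0.5):
    """Relaxed set cover with centrality-weighted costs: minimize
    cost^T x subject to A x >= 1, 0 <= x <= 1, then round x."""
    nodes = list(G.nodes)
    n = len(nodes)
    deg = np.array([G.degree(v) for v in nodes], dtype=float)
    bc = nx.betweenness_centrality(G)
    btw = np.array([bc[v] for v in nodes])
    # Central nodes get lower cost, so the LP prefers them as nexus nodes.
    cost = (1.0 - alpha * deg / deg.max()
                - (1 - alpha) * btw / (btw.max() + 1e-12))
    # Coverage matrix: node j covers node i if j == i or j ~ i.
    A = nx.to_numpy_array(G, nodelist=nodes) + np.eye(n)
    res = linprog(cost, A_ub=-A, b_ub=-np.ones(n), bounds=[(0, 1)] * n)
    return [nodes[j] for j in np.flatnonzero(res.x > thresh)]

G = nx.erdos_renyi_graph(60, 0.08, seed=1)
print(sorted(select_nexus_nodes(G)))
```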
We revisit the problem of spectral clustering in multimodal settings, where each data modality is encoded as a graph Laplacian. While classical approaches--including joint diagonalization, spectral co-regularization, and multiview clustering--attempt to align embeddings across modalities, they often rely on costly iterative refinement and may fail to directly target the spectral subspace relevant for clustering. In this work, we introduce two key innovations. First, we bring the power of randomization to this setting by sampling random convex combinations of Laplacians as a simple and scalable alternative to explicit eigenspace alignment. Second, we propose a principled selection rule based on Bottom-$k$ Aggregated Spectral Energy (BASE)--a $k$-dimensional extension of the directional smoothness objective from recent minimax formulations--which we uniquely apply as a selection mechanism rather than an optimization target. The result is Randomized Joint Diagonalization with BASE Selection (RJD-BASE), a method that is easily implementable, computationally efficient, aligned with the clustering objective, and grounded in decades of progress in standard eigensolvers. Through experiments on synthetic and real-world datasets, we show that RJD-BASE reliably selects high-quality embeddings, outperforming classical multimodal clustering methods at low computational cost.
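A compact NumPy/SciPy sketch of RJD-BASE as the abstract describes it; the Dirichlet sampling of convex weights and the sample count are illustrative assumptions, while the BASE score follows the stated form (bottom-k spectral energy summed across modalities).

```python
import numpy as np
from scipy.linalg import eigh

def rjd_base(laplacians, k, n_samples=50, seed=0):
    """Sample random convex combinations of the (symmetric) modal graph
    Laplacians, take the bottom-k eigenvectors of each sample, and keep
    the embedding minimizing the Bottom-k Aggregated Spectral Energy,
    i.e. sum_m trace(U^T L_m U)."""
    rng = np.random.default_rng(seed)
    best_U, best_energy = None, np.inf
    for _ in range(n_samples):
        w = rng.dirichlet(np.ones(len(laplacians)))    # convex weights
        L = sum(wi * Li for wi, Li in zip(w, laplacians))
        _, U = eigh(L, subset_by_index=[0, k - 1])     # bottom-k eigvecs
        energy = sum(np.trace(U.T @ Lm @ U) for Lm in laplacians)
        if energy < best_energy:
            best_energy, best_U = energy, U
    return best_U   # rows feed a standard k-means for the final clusters
```

Note that BASE is used purely as a selection rule over the random samples, mirroring the paper's distinction between selection and optimization.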
Ray tracing (RT) simulations require accurate transmitter (TX) and receiver (RX) location information from real-world measurements to accurately characterize wireless propagation behavior in an environment. Such wireless propagation measurements typically employ GPS-based logging for TX/RX locations, which can produce meter-level errors that lead to unreliable RT calibration and validation. These location misalignments cause inaccurate interactions between RT-generated multipath components (MPCs) and the modeled 3D environment, which lead to erroneous channel predictions, and severe discrepancies between simulated and measured power delay profiles (PDPs) and channel characteristics. Moreover, the same RT-generated PDPs using inaccurate locations result in calibration errors when adjusting material properties such as conductivity and permittivity. This paper presents a systematic multi-stage TX/RX location calibration framework to correct location errors and consequently align measured and simulated omnidirectional PDPs. Optimization is performed using a computationally efficient multi-stage grid search and the Powell method. Applying the location calibration framework to NYU WIRELESS urban-microcell (UMi) measurements at 6.75 GHz and 16.95 GHz corrected TX/RX location errors of up to 7 m. The framework reduced the composite loss function by 42.3% for line-of-sight (LOS) and 13.5% for non-line-of-sight (NLOS) scenarios. Furthermore, peak power prediction accuracy improved by approximately 1 dB on average. Such improved geometric alignment enables accurate channel prediction, vital for beam management and infrastructure deployment for next-generation wireless networks.
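A two-stage sketch of the grid-search-plus-Powell strategy the abstract describes, using SciPy's Powell implementation; the grid span, step, and the toy quadratic loss are illustrative stand-ins for the paper's composite PDP loss.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_location(loss, x0, span=5.0, step=1.0):
    """Coarse grid search around the GPS-logged position x0, then Powell
    refinement of the best grid point. `loss` would compare measured and
    RT-simulated omnidirectional PDPs at the candidate location."""
    grid = np.arange(-span, span + step, step)
    best = min(((loss(x0 + np.array([dx, dy])), x0 + np.array([dx, dy]))
                for dx in grid for dy in grid), key=lambda t: t[0])[1]
    return minimize(loss, best, method="Powell").x

# Illustrative loss: squared distance to an unknown ~3 m location offset.
true_xy = np.array([2.7, -1.9])
est = calibrate_location(lambda p: np.sum((p - true_xy) ** 2),
                         x0=np.zeros(2))
print(est)   # converges near true_xy
```

The coarse stage keeps the derivative-free Powell search out of distant local minima, which matters when meter-level GPS errors shift which environment facets the simulated MPCs interact with.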
Sun-induced fluorescence (SIF), a remote-sensing proxy closely tied to photosynthesis, is accepted as a useful measure for remotely monitoring vegetation health and gross primary productivity. In this work we present the new retrieval method WAFER (WAvelet decomposition FluorEscence Retrieval), based on wavelet decompositions of the measured spectra of reflected radiance as well as a reference radiance not containing fluorescence. By comparing absolute absorption line depths by means of the corresponding wavelet coefficients, a relative reflectance is retrieved independently of the fluorescence, i.e. without introducing a coupling between reflectance and fluorescence. The fluorescence can then be derived as the remaining offset. This method can be applied to arbitrarily chosen wavelength windows across the whole spectral range, so that all available spectral data are exploited, including the separation into several frequency levels (i.e. widths of absorption lines), and without the need for extensive training datasets. At the same time, the assumptions about the reflectance shape are minimal and no spectral shape assumptions are imposed on the fluorescence, which not only avoids biases arising from wrong or differing fluorescence models across different spatial scales and retrieval methods but also allows for the exploration of this spectral shape for different measurement setups. WAFER is tested on a synthetic dataset as well as several diurnal datasets acquired with a field spectrometer (FloX) over an agricultural site. We compare WAFER to two established retrieval methods, namely the improved Fraunhofer line discrimination (iFLD) method and the spectral fitting method (SFM), and find good agreement, with the added possibility of exploring the true spectral shape of the offset signal and a free choice of the retrieval window. (abbreviated)
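A toy version of the core idea, assuming the signal model measured = R · reference + F with smooth F and a single scalar R per window (the real method works window-by-window and level-by-level); `pywt` supplies the wavelet decomposition, and the wavelet and level choices are arbitrary.

```python
import numpy as np
import pywt

def wafer_sketch(measured, reference, wavelet="sym8", level=4):
    """Compare absorption-line depths of measured and reference radiance
    (spectra on a common wavelength grid) via wavelet detail
    coefficients. An additive fluorescence offset is spectrally smooth,
    so it barely touches the detail coefficients; the least-squares
    ratio of detail energies therefore recovers the reflectance R
    independently of the fluorescence, which remains as the offset."""
    dm = pywt.wavedec(measured, wavelet, level=level)
    dr = pywt.wavedec(reference, wavelet, level=level)
    num = sum(float(np.dot(a, b)) for a, b in zip(dm[1:], dr[1:]))
    den = sum(float(np.dot(b, b)) for b in dr[1:])
    R = num / den
    return R, measured - R * reference   # reflectance, fluorescence offset
```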
Beam training and prediction in millimeter-wave communications are highly challenging due to fast time-varying channels and sensitivity to blockages and mobility. In this context, infrastructure-mounted cameras can capture rich environmental information that can facilitate beam tracking design. In this work, we develop an efficient attention-enhanced machine learning model for long-term beam tracking, built upon convolutional neural networks and gated recurrent units, to predict both current and future beams from past observed images. The integrated temporal attention mechanism substantially improves its predictive performance. Numerical results demonstrate that the proposed design achieves Top-5 beam prediction accuracies exceeding 90% for both the current and six future time slots, significantly reducing the overhead arising from sensing and processing for beam training. It further attains 97% of state-of-the-art performance with only 3% of the computational complexity.
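A PyTorch sketch of the CNN + GRU + temporal-attention architecture the abstract outlines; `BeamTracker` and every layer size are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class BeamTracker(nn.Module):
    """A small CNN compresses each past image, a GRU models the frame
    sequence, and temporal attention pools the hidden states before a
    linear head predicts beams for the current and six future slots."""

    def __init__(self, n_beams=64, horizon=7, hid=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),       # -> 32*4*4 = 512
        )
        self.gru = nn.GRU(512, hid, batch_first=True)
        self.attn = nn.Linear(hid, 1)                    # temporal attention
        self.head = nn.Linear(hid, n_beams * horizon)
        self.n_beams, self.horizon = n_beams, horizon

    def forward(self, frames):           # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        h, _ = self.gru(feats)                           # (B, T, hid)
        a = torch.softmax(self.attn(h), dim=1)           # (B, T, 1)
        ctx = (a * h).sum(dim=1)                         # attention pooling
        return self.head(ctx).view(b, self.horizon, self.n_beams)
```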
The rapid development of the low-altitude economy has imposed unprecedented demands on wireless infrastructure to accommodate large-scale drone deployments and facilitate intelligent services in dynamic airspace environments. However, unlocking its full potential in practical applications presents significant challenges. Traditional aerial systems predominantly focus on air-ground communication services, often neglecting the integration of sensing, computation, control, and energy-delivering functions, which hinders the ability to meet diverse mission-critical demands. Moreover, the absence of systematic low-altitude airspace planning and management exacerbates issues of dynamic interference in three-dimensional space, coverage instability, and scalability. To overcome these challenges, a comprehensive framework, termed low-altitude wireless network (LAWN), has emerged to seamlessly integrate communication, sensing, computation, control, and air traffic management into a unified design. This article provides a comprehensive overview of LAWN systems, introducing LAWN system fundamentals and the evolution of functional designs. Subsequently, we delve into performance evaluation metrics and review critical concerns surrounding privacy and security in the open-air network environment. Finally, we present the cutting-edge developments in airspace structuring and air traffic management, providing insights to facilitate the practical deployment of LAWNs.
Cardiovascular diseases (CVDs) are the leading cause of death worldwide, accounting for approximately 17.9 million deaths each year. Early detection is critical, creating a demand for accurate and inexpensive pre-screening methods. Deep learning has recently been applied to classify abnormal heart sounds indicative of CVDs using synchronised phonocardiogram (PCG) and electrocardiogram (ECG) signals, as well as multichannel PCG (mPCG). However, state-of-the-art architectures remain underutilised due to the limited availability of synchronised and multichannel datasets. Augmented datasets and pre-trained models provide a pathway to overcome these limitations, enabling transformer-based architectures to be trained effectively. This work combines traditional signal processing with denoising diffusion models, WaveGrad and DiffWave, to create an augmented dataset to fine-tune a Wav2Vec 2.0-based classifier on multimodal and multichannel heart sound datasets. The approach achieves state-of-the-art performance. On the Computing in Cardiology (CinC) 2016 dataset of single-channel PCG, accuracy, unweighted average recall (UAR), sensitivity, specificity and Matthews correlation coefficient (MCC) reach 92.48%, 93.05%, 93.63%, 92.48%, 94.93% and 0.8283, respectively. Using the synchronised PCG and ECG signals of the training-a dataset from CinC, 93.14%, 92.21%, 94.35%, 90.10%, 95.12% and 0.8380 are achieved for accuracy, UAR, sensitivity, specificity and MCC, respectively. Using a wearable vest dataset consisting of mPCG data, the model achieves 77.13% accuracy, 74.25% UAR, 86.47% sensitivity, 62.04% specificity, and 0.5082 MCC. These results demonstrate the effectiveness of transformer-based models for CVD detection when supported by augmented datasets, highlighting their potential to advance multimodal and multichannel heart sound classification.
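A hedged sketch of what a Wav2Vec 2.0-based heart-sound classifier could look like with Hugging Face `transformers`: the checkpoint, pooling, and head are illustrative assumptions (the paper's handling of multichannel PCG/ECG inputs, e.g. mixing or per-channel encoding, happens before this stage).

```python
import torch.nn as nn
from transformers import Wav2Vec2Model

class HeartSoundClassifier(nn.Module):
    """Pre-trained speech encoder plus a mean-pooled linear head for
    normal/abnormal heart-sound classification."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.head = nn.Linear(self.encoder.config.hidden_size, n_classes)

    def forward(self, audio):            # audio: (batch, samples) at 16 kHz
        hidden = self.encoder(audio).last_hidden_state   # (B, T', hidden)
        return self.head(hidden.mean(dim=1))             # mean-pool, classify
```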
A radio map captures the spatial distribution of wireless channel parameters, such as received signal strength, across a geographic area. The problem of fine-grained three-dimensional (3D) radio map estimation involves inferring a high-resolution radio map for the two-dimensional (2D) area at an arbitrary target height within a 3D region of interest, using radio samples collected by sensors sparsely distributed in that 3D region. Solutions to the problem are crucial for efficient spectrum management in 3D spaces, particularly for drones in the rapidly developing low-altitude economy. However, this problem is challenging due to ultra-sparse sampling, where the number of collected radio samples is far fewer than the desired resolution of the radio map to be estimated. In this paper, we design a Large Artificial Intelligence Model (LAM) called RadioLAM for the problem. RadioLAM employs the creative power and the strong generalization capability of LAM to address the ultra-sparse sampling challenge. It consists of three key blocks: 1) an augmentation block, using the radio propagation model to project the radio samples collected at different heights to the 2D area at the target height; 2) a generation block, leveraging an LAM under a Mixture of Experts (MoE) architecture to generate a candidate set of fine-grained radio maps for the target 2D area; and 3) an election block, utilizing the radio propagation model as a guide to find the best map from the candidate set. Extensive simulations show that RadioLAM is able to solve the fine-grained 3D radio map estimation problem efficiently from an ultra-low sampling rate of 0.1%, and significantly outperforms the state-of-the-art.
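The augmentation block is the most mechanical of the three, so here is a hedged sketch of it: a log-distance path-loss model with exponent `n` stands in for whatever propagation model the paper uses, and `project_to_height` is a hypothetical name.

```python
import numpy as np

def project_to_height(samples, tx, z_target, n=2.3):
    """Re-project sparse radio samples taken at various heights onto the
    target-height plane: keep each sample's horizontal position and shift
    its RSS by the path-loss delta implied by the new TX distance.

    samples: iterable of (x, y, z, rss_dBm); tx: np.array([x, y, z])."""
    out = []
    for x, y, z, rss in samples:
        d_old = np.linalg.norm(np.array([x, y, z]) - tx)
        d_new = np.linalg.norm(np.array([x, y, z_target]) - tx)
        out.append((x, y, z_target,
                    rss - 10 * n * np.log10(d_new / d_old)))
    return np.array(out)
```

The projected samples then serve as the conditioning input for the generation block, while the same propagation model later scores candidates in the election block.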
Stacked intelligent metasurface (SIM) and dual-polarized SIM (DPSIM) enabled wave-domain signal processing have emerged as promising research directions for offloading baseband digital processing tasks and efficiently simplifying transceiver design. However, existing architectures are limited to employing SIM (DPSIM) for a single communication function, such as precoding or combining. To further enhance the overall performance of SIM (DPSIM)-assisted systems and achieve end-to-end (E2E) joint optimization from the transmitted bitstream to the received bitstream, we propose an SIM (DPSIM)-assisted E2E orthogonal frequency-division multiplexing (OFDM) system, where traditional communication tasks such as modulation, precoding, combining, and demodulation are performed simultaneously during electromagnetic (EM) forward propagation. Furthermore, inspired by the idea of abstracting real metasurfaces as hidden layers of a neural network, we propose the electromagnetic neural network (EMNN) to enable the control of the E2E OFDM communication system. In addition, transfer learning is introduced into the model training, and a training and deployment framework for the EMNN is designed. Simulation results demonstrate that both SIM-assisted E2E OFDM systems and DPSIM-assisted E2E OFDM systems can achieve robust bitstream transmission under complex channel conditions. Our study highlights the application potential of EMNN and SIM (DPSIM)-assisted E2E OFDM systems in the design of next-generation transceivers.
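To illustrate the metasurface-as-hidden-layer abstraction, here is a hedged PyTorch sketch: each SIM layer applies trainable phase shifts to a complex field and then a fixed layer-to-layer propagation matrix. A random unitary matrix stands in for the physical (e.g. Rayleigh-Sommerfeld) propagation operator, and `MetasurfaceLayer` with its sizes is a hypothetical illustration.

```python
import torch
import torch.nn as nn

class MetasurfaceLayer(nn.Module):
    """One SIM layer of an EMNN-style model: phase-only trainable
    weights acting on the complex field, then fixed propagation W."""

    def __init__(self, n_elems):
        super().__init__()
        self.phase = nn.Parameter(torch.rand(n_elems) * 2 * torch.pi)
        # Fixed propagation: random unitary stand-in for the physics.
        W = torch.linalg.qr(torch.randn(n_elems, n_elems,
                                        dtype=torch.cfloat))[0]
        self.register_buffer("W", W)

    def forward(self, field):            # field: (batch, n_elems), complex
        return (field * torch.exp(1j * self.phase)) @ self.W

# Wave-domain "forward pass": stacked layers act on symbols directly,
# so modulation/precoding happen during EM propagation, not in baseband.
emnn = nn.Sequential(*[MetasurfaceLayer(64) for _ in range(4)])
x = torch.randn(8, 64, dtype=torch.cfloat)
y = emnn(x)
```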
A systematic approach for high-speed via transition design is proposed. The effects of via barrel radius, anti-pad size, and the distance from adjacent stitching (GND) vias on bandwidth are analyzed and characterized. Guidelines for selecting parameter values are provided and validated by correlating 3D full-wave FEM simulation results with actual measurements of the coupon board. When a sufficient number of stitching vias are used, the via structure can be approximated as a coaxial transmission line. The proposed methodology builds on this approximation and also considers high-order modes. With this framework, engineers can easily optimize design parameters while intuitively understanding how geometry affects bandwidth. This approach also allows engineers with limited access to expensive and computationally intensive 3D FEM tools to design high bandwidth vias up to 67 GHz.
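The coaxial approximation mentioned above admits a simple closed form, sketched below: with enough stitching vias, the anti-pad acts as the outer conductor and the barrel as the inner conductor, giving Z0 = (60/√εr)·ln(b/a). The example dimensions are illustrative, and this first-order estimate ignores the high-order modes the full methodology accounts for.

```python
import numpy as np

def coax_impedance(anti_pad_radius, barrel_radius, eps_r):
    """Coaxial-line approximation for a well-stitched via:
    Z0 = (60 / sqrt(eps_r)) * ln(b / a), b = anti-pad (outer),
    a = barrel (inner). A sizing aid, not a full-wave substitute."""
    return 60.0 / np.sqrt(eps_r) * np.log(anti_pad_radius / barrel_radius)

# Example: 5 mil barrel radius, 15 mil anti-pad radius, FR-4 (eps_r ~ 4).
print(f"Z0 = {coax_impedance(15.0, 5.0, 4.0):.1f} ohm")   # ~33 ohm
```

Sweeping the barrel/anti-pad ratio with this formula gives the intuitive geometry-to-impedance link the methodology exploits before any 3D FEM run.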
Cooperative reconfigurable intelligent surfaces (RISs) are a promising technology for 6G networks to support large numbers of users. Compared with fixed RISs, properly deployed RISs can improve communication performance with less communication energy consumption, thereby improving energy efficiency. In this paper, we consider a cooperative unmanned aerial vehicle-mounted RIS (UAV-RIS)-assisted cellular network, where multiple RISs are carried and enhanced by UAVs to serve multiple ground users (GUs) simultaneously, thereby achieving three-dimensional (3D) mobility and opportunistic deployment. Specifically, we formulate an energy-efficient communication problem based on a multi-objective optimization framework (EEComm-MOF) to jointly consider the beamforming vector of the base station (BS) and the location deployment and discrete phase shifts of the UAV-RIS system, so as to simultaneously maximize the minimum available rate over all GUs, maximize the total available rate of all GUs, and minimize the total energy consumption of the system, subject to the transmit power constraint of the BS. To comprehensively solve EEComm-MOF, which is an NP-hard and non-convex problem with constraints, a non-dominated sorting genetic algorithm-II with a continuous solution processing mechanism, a discrete solution processing mechanism, and a complex solution processing mechanism (INSGA-II-CDC) is proposed. Simulation results demonstrate that the proposed INSGA-II-CDC solves EEComm-MOF effectively and outperforms other benchmarks under different parameter settings. Moreover, the stability of INSGA-II-CDC and the effectiveness of the improved mechanisms are verified. Finally, an implementability analysis of the algorithm is given.
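A hedged sketch of the multi-objective setup using the off-the-shelf NSGA-II in `pymoo` (the paper's INSGA-II-CDC adds custom solution-processing mechanisms not reproduced here). The toy problem `EECommToy`, its decision variables, and its three objectives are illustrative stand-ins for EEComm-MOF.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class EECommToy(ElementwiseProblem):
    """Toy stand-in: variables are a UAV-RIS x/y position and one phase;
    objectives mimic min-rate (maximize), sum-rate (maximize) and energy
    (minimize). pymoo minimizes everything, so the rates are negated."""

    def __init__(self):
        super().__init__(n_var=3, n_obj=3,
                         xl=np.array([0.0, 0.0, 0.0]),
                         xu=np.array([100.0, 100.0, 2 * np.pi]))

    def _evaluate(self, x, out, **kwargs):
        d = np.hypot(x[0] - 50, x[1] - 50)       # distance to a GU cluster
        rate = np.log2(1 + 100 / (1 + d)) * (1 + 0.1 * np.cos(x[2]))
        energy = 0.5 * d + 10
        out["F"] = [-0.8 * rate, -rate, energy]  # min-rate, sum-rate, energy

res = minimize(EECommToy(), NSGA2(pop_size=40), ("n_gen", 60), seed=1)
print(res.F[:3])   # a slice of the Pareto front
```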
In this project, we sought to develop an analog topology that could effectively convert amplitude-modulated (AM) signals to frequency-modulated (FM) signals, while ensuring that both sets of signals remained within their respective radio-frequency (RF) bands. To that end, an effective topology was developed, characterized, and demonstrated, requiring the ability to demodulate incoming signals from the AM radio band--spanning 530 kHz to 1700 kHz--and re-modulate these signals into the FM radio band--spanning 88 MHz to 108 MHz. These bands are separated by roughly 86 MHz, so the topology must radically alter the incoming frequency before re-broadcasting. At its simplest, this required an AM demodulation circuit coupled to a voltage-controlled oscillator (VCO). Together, these two circuits translated variations in the incoming envelope signal into variations in the output frequency while maintaining high-fidelity audio, much as existing radio receivers and broadcast transmitters do. Altogether, the project not only produced a working system but also provided valuable instruction in the design, analysis, and construction of effective RF circuits--invaluable for future work in analog electronics.
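The envelope-detector-plus-VCO signal chain is easy to verify numerically; here is a hedged NumPy/SciPy sketch in which an ideal Hilbert-transform envelope detector stands in for the diode detector and the VCO gain `k_vco` is an illustrative value.

```python
import numpy as np
from scipy.signal import hilbert

fs = 400e6                            # sample rate high enough for 100 MHz
t = np.arange(0, 1e-3, 1 / fs)

# AM input: 1 kHz audio on a 1 MHz carrier (inside the 530-1700 kHz band).
audio = np.sin(2 * np.pi * 1e3 * t)
am = (1 + 0.5 * audio) * np.cos(2 * np.pi * 1e6 * t)

# Envelope detection: idealized diode detector via the Hilbert transform.
msg = np.abs(hilbert(am))
msg -= msg.mean()                     # recovered audio drives the VCO

# VCO re-modulation into the FM band: f_inst = 100 MHz + k_vco * msg.
k_vco = 75e3                          # frequency deviation per unit input
fm = np.cos(2 * np.pi * np.cumsum(100e6 + k_vco * msg) / fs)
```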
Motivated by the constant modulus property of frequency shift keying (FSK) based waveforms and the stabilisation of their radar performance as the number of subpulses increases, in this paper an FSK-based dynamic-subpulse-number joint communications and radar waveform design is proposed. From a communications point of view, the system operates using traditional FSK modulation. From a sensing point of view, although the subpulses are continuously generated and transmitted, radar waveforms are dynamically formed by monitoring the flatness of the spectrum, which in turn guarantees the accuracy of the delay estimation. Further constraints on the waveform length ensure satisfactory values of the root mean square time duration and ambiguity function sidelobe levels, and prevent overly long waveforms. To estimate the probability of generating extremely long waveforms, the distribution of the number of subpulses is approximated using a Brownian motion process and an existing result on its one-sided exit density. Numerical examples are provided to evaluate the accuracy of the approximate distribution, as well as the ambiguity function sidelobe levels and the delay and Doppler shift estimation performance of the transmitted waveforms.
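A hedged sketch of the dynamic-subpulse mechanism: random FSK subpulses (the communications data) keep accumulating until the spectrum, tracked here at the FSK tone resolution as a simplification, is flat enough to close the radar waveform. The flatness threshold, alphabet size, and samples per subpulse are illustrative, and the paper's additional length constraints are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_fsk_waveform(m=16, sps=32, flat_thresh=0.9, max_pulses=512):
    """Append random FSK subpulses until the tone-power spectrum is flat:
    flatness = geometric mean / arithmetic mean of per-tone energy
    (1.0 = perfectly flat), a standard spectral-flatness measure."""
    n = np.arange(sps)
    counts = np.zeros(m)                  # per-tone energy accumulator
    pulses = []
    for _ in range(max_pulses):
        tone = rng.integers(m)            # random data symbol
        pulses.append(np.exp(2j * np.pi * tone * n / sps))
        counts[tone] += 1
        flat = np.exp(np.mean(np.log(counts + 1e-12))) / counts.mean()
        if flat > flat_thresh:
            break                         # spectrum flat: close the waveform
    return np.concatenate(pulses)

wf = dynamic_fsk_waveform()
print(len(wf) // 32, "subpulses used")
```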
Infrastructure-mounted sensors can capture rich environmental information to enhance communications and facilitate beamforming in millimeter-wave systems. This work presents an efficient sensing-assisted long-term beam tracking framework that selects optimal beams from a codebook for current and multiple future time slots. We first design a large attention-enhanced neural network (NN) to fully exploit past visual observations for beam tracking. A convolutional NN extracts compact image features, while gated recurrent units with attention capture the temporal dependencies within sequences. The large NN then acts as the teacher to guide the training of a lightweight student NN via knowledge distillation. The student requires shorter input sequences yet preserves long-term beam prediction ability. Numerical results demonstrate that the teacher achieves Top-5 accuracies exceeding 93% for current and six future time slots, approaching state-of-the-art performance with a 90% complexity reduction. The student closely matches the teacher's performance while cutting complexity by another 90%, despite operating with 60% shorter input sequences. This improvement significantly enhances data efficiency, reduces latency, and lowers power consumption in sensing and processing.
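The teacher-student transfer described above typically uses the standard knowledge-distillation objective; a sketch follows, with temperature `T` and blend weight `alpha` as illustrative hyperparameters rather than the paper's settings.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.7):
    """Blend a temperature-softened KL term, pulling the student's beam
    distribution toward the teacher's, with cross-entropy on the true
    beam indices. logits: (batch, n_beams); labels: (batch,)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T                  # rescale soft-term gradients (Hinton et al.)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

Because the soft targets encode the teacher's full ranking over the codebook, the student can preserve Top-5 accuracy even with its 60% shorter input sequences.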