Browse, search and filter the latest cybersecurity research papers from arXiv
It is well known that, given \(b \ge 0\), finding an \((a, b)\)-trapping set with the minimum \(a\) in a binary linear code is NP-hard. In this paper, we demonstrate that for codes with bounded treewidth this problem can be solved with complexity linear in the code length. Furthermore, when a tree decomposition realizing the treewidth of the binary linear code is known, we provide a concrete algorithm that computes, for a given \(b\), the minimum \(a\) and the number of corresponding \((a, b)\)-trapping sets in linear time. Simulation experiments are presented to verify the correctness of the proposed algorithm.
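As a concrete reading of the definition above, the sketch below brute-forces the minimum \(a\) for a given \(b\) on a toy parity-check matrix (the [7,4] Hamming code, an illustrative choice): for a set \(S\) of variable nodes, \(b\) counts the check nodes adjacent to an odd number of elements of \(S\). This exponential-time check only illustrates the quantity being computed; it does not reproduce the paper's linear-time tree-decomposition algorithm.

```python
from itertools import combinations

def min_a_trapping(H, b):
    """Smallest |S| such that exactly b checks see S an odd number of times,
    together with the number of such sets S. Brute force, for illustration only."""
    m, n = len(H), len(H[0])
    for a in range(1, n + 1):
        count = 0
        for S in combinations(range(n), a):
            # b = number of check nodes with an odd number of neighbors in S
            odd = sum(sum(H[i][j] for j in S) % 2 for i in range(m))
            if odd == b:
                count += 1
        if count:
            return a, count
    return None

# Parity-check matrix of the [7,4] Hamming code (illustrative example).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

For \(b = 0\) this recovers the minimum distance and the number of minimum-weight codewords, since an \((a, 0)\)-trapping set is exactly the support of a codeword.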
Sensing-assisted predictive beamforming, one of the enabling technologies for the emerging integrated sensing and communication (ISAC) paradigm, shows significant promise for enhancing various future unmanned aerial vehicle (UAV) applications. However, existing works predominantly emphasize spectral efficiency enhancement, while the impact of such beamforming techniques on communication reliability remains largely unexplored and challenging to characterize. To fill this research gap, this paper investigates outage capacity maximization for UAV tracking under the sensing-assisted predictive beamforming scheme. Specifically, a cellular-connected UAV tracking scheme is proposed leveraging extended Kalman filtering (EKF), where the predicted UAV trajectory, the sensing duration ratio, and the target constant received signal-to-noise ratio (SNR) are jointly optimized to maximize the outage capacity at each time slot. To address the implicit nature of the objective function, closed-form approximations of the outage probabilities (OPs) at both the prediction and measurement stages of each time slot are derived based on second-order Taylor expansions, providing an efficient and complete characterization of the outage capacity. Subsequently, an efficient algorithm combining bisection search and successive convex approximation (SCA) is proposed to address the non-convex optimization problem with guaranteed convergence. To further reduce computational complexity, a second efficient algorithm is developed based on alternating optimization (AO). Simulation results validate the accuracy of the derived OP approximations, the effectiveness of the proposed algorithms, and the significant outage capacity enhancement over various benchmarks, while also revealing a trade-off between decreasing path loss and enjoying wide beam coverage for outage capacity maximization.
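For readers unfamiliar with the EKF machinery referenced above, a minimal scalar predict/update cycle is sketched below. The 1-D constant-velocity model, the noise values, and the function shape are illustrative assumptions only; the paper's scheme jointly optimizes trajectory prediction, sensing duration, and target SNR on top of such a filter.

```python
def ekf_step(x, v, P, z, dt=0.1, q=0.01, r=0.25):
    """One predict/update cycle for a 1-D constant-velocity target (toy model).

    x, v : position / velocity estimate
    P    : 2x2 covariance, stored as [[p00, p01], [p10, p11]]
    z    : noisy position measurement
    q, r : process / measurement noise variances (illustrative values)
    """
    # --- predict: x' = x + v*dt; the state-transition Jacobian is F = [[1, dt], [0, 1]] ---
    x_pred = x + v * dt
    v_pred = v
    p00, p01 = P[0]
    p10, p11 = P[1]
    # P <- F P F^T + Q
    n00 = p00 + dt * (p10 + p01) + dt * dt * p11 + q
    n01 = p01 + dt * p11
    n10 = p10 + dt * p11
    n11 = p11 + q
    # --- update: measurement H = [1, 0] observes position only ---
    s = n00 + r                        # innovation covariance
    k0, k1 = n00 / s, n10 / s          # Kalman gain
    innov = z - x_pred                 # innovation (measurement residual)
    x_new = x_pred + k0 * innov
    v_new = v_pred + k1 * innov
    P_new = [[(1 - k0) * n00, (1 - k0) * n01],
             [n10 - k1 * n00, n11 - k1 * n01]]   # (I - K H) P
    return x_new, v_new, P_new
```

In a sensing-assisted tracking loop, the predicted state (here `x_pred`) would steer the beam before the measurement `z` arrives, which is the step whose reliability the paper's outage analysis quantifies.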
Twisted Gabidulin codes are an extension of Gabidulin codes and have recently attracted great attention. In this paper, we study three classes of twisted Gabidulin codes with different twists. Moreover, we establish necessary and sufficient conditions for them to be maximum rank distance (MRD) codes, determine the conditions under which they are not MRD codes, and construct several classes of MRD codes via twisted Gabidulin codes. In addition, considering these codes in the Hamming metric, we provide necessary and sufficient conditions for them to be maximum distance separable (MDS), almost MDS, or near MDS. Finally, we investigate the covering radii and deep holes of twisted Gabidulin codes.
In this paper, we propose a sustainable long short-term memory (LSTM)-based precoding framework for reconfigurable intelligent surface (RIS)-assisted millimeter-wave (mmWave) MIMO systems. Instead of explicit channel state information (CSI) estimation, the framework exploits uplink pilot sequences to implicitly learn channel characteristics, reducing both pilot overhead and inference complexity. Practical hardware constraints are addressed by incorporating the phase-dependent amplitude model of RIS elements, while a multi-label training strategy improves robustness when multiple near-optimal codewords yield comparable performance. Simulations show that the proposed design achieves over 90% of the spectral efficiency of exhaustive search (ES) with only 2.2% of its computation time, cutting energy consumption by nearly two orders of magnitude. The method also demonstrates resilience under distribution mismatch and scalability to larger RIS arrays, making it a practical and energy-efficient solution for sustainable 6G wireless networks.
The advent of Rydberg atomic quantum receivers (RAQRs) offers a new solution for the evolution of wireless transceiver architecture, promising unprecedented sensitivity and immunity to thermal noise. However, RAQRs introduce a unique non-linear signal model based on biased phase retrieval, which complicates fundamental channel estimation tasks. Traditional iterative algorithms often struggle in low signal-to-noise-ratio regimes and fail to capture complex and non-ideal system characteristics. To address this, we propose a novel model-driven deep learning framework for channel estimation in RAQRs. Specifically, we propose a Transformer-based unrolling architecture, termed URformer, which is derived by unrolling a stabilized variant of the expectation-maximization Gerchberg-Saxton (EM-GS) algorithm. Each layer of the proposed URformer incorporates three trainable modules: 1) a learnable filter implemented by a neural network that replaces the fixed Bessel function ratio in the classic EM-GS algorithm; 2) a trainable gating mechanism that adaptively combines classic and model-based updates to ensure training stability; and 3) an efficient channel Transformer block that learns to correct residual errors by capturing non-local dependencies across the channel matrix. Numerical results demonstrate that the proposed URformer significantly outperforms classic iterative algorithms and conventional black-box neural networks while requiring less pilot overhead.
We develop sharp bounds on the statistical distance between high-dimensional permutation mixtures and their i.i.d. counterparts. Our approach establishes a new geometric link between the spectrum of a complex channel overlap matrix and the information geometry of the channel, yielding tight dimension-independent bounds that close gaps left by previous work. Within this geometric framework, we also derive dimension-dependent bounds that uncover phase transitions in dimensionality for Gaussian and Poisson families. Applied to compound decision problems, this refined control of permutation mixtures enables sharper mean-field analyses of permutation-invariant decision rules, yielding strong non-asymptotic equivalence results between two notions of compound regret in Gaussian and Poisson models.
The paper explores three known methods, their variants and limitations, that can be used to obtain new entropy inequalities. The Copy Lemma was distilled from the original Zhang-Yeung construction, which produced the first non-Shannon inequality. Its iterated version, the effects of symmetrizations, and connections with polyhedral vertex enumeration are discussed. Another method, derived from the principle of maximum entropy, has the Copy Lemma as a special case. Nevertheless, neither of the two presented variants is known to generate more inequalities than the iterated Copy Lemma. Finally, the Ahlswede-K\"orner method is shown to employ a hidden application of the Copy Lemma (the underlying lemma alone cannot generate new inequalities), which makes this method strictly weaker than the Copy Lemma. The paper is written in a tutorial style and concludes with a list of open questions and research problems.
This paper investigates an information-theoretic model of secure semantic-aware communication. For this purpose, we consider the lossy joint source-channel coding (JSCC) of a memoryless semantic source transmitted over a memoryless wiretap channel. The source consists of two correlated parts that represent semantic and observed aspects of the information. Our model assumes separate fidelity and secrecy constraints on each source component and, in addition, encompasses two cases for the source output, in order to evaluate the performance gains when the encoder has extended access to the source. Specifically, in Case 1 the encoder has direct access only to samples from a single (observed) source component, while in Case 2 it has additional direct access to samples of the underlying semantic information. We derive single-letter converse and achievability bounds on the rate-distortion-equivocation region. The converse bound explicitly contains rate-distortion functions, making it easy to evaluate, especially for some common distributions. The proposed achievability coding scheme involves novel stochastic superposition coding with two private parts, enabling the equivocation of each source component to be analysed separately. Our results generalise some previously established source and source-channel coding problems. The general results are further specialised to Gaussian and Bernoulli sources transmitted over Gaussian and binary wiretap channels, respectively. Numerical evaluations illustrate the derived bounds for these distributions.
This paper presents an innovative movable antenna (MA)-enhanced multi-user multiple-input multiple-output (MIMO) downlink system. We aim to maximize the energy efficiency (EE) under statistical channel state information (S-CSI) through a joint optimization of the precoding matrix and the antenna position vectors (APVs). To solve the resulting stochastic problem, we first resort to deterministic equivalent (DE) technology to formulate the deterministic minorizing function of the system EE and the deterministic function of each user terminal (UT)'s average achievable rate w.r.t. the transmit variables (i.e., the precoding matrix and the transmit APV) and the corresponding receive APV, respectively. Then, we propose an alternating optimization (AO) algorithm that alternately optimizes the transmit variables and the receive APVs to maximize the formulated deterministic objective functions. Finally, the above AO algorithm is tailored to the single-user scenario. Our numerical results reveal that the proposed MA-enhanced system can significantly improve the system EE compared to several S-CSI-based benchmark schemes, and that the optimal performance can be achieved with movement regions of finite size for the MAs.
In Vehicle-to-Everything (V2X) networks with multi-hop communication, Road Side Units (RSUs) intend to gather location data from the vehicles to offer various location-based services. Although vehicles use the Global Positioning System (GPS) for navigation, they may refrain from sharing their exact GPS coordinates with the RSUs due to privacy considerations. Thus, to address the localization expectations of the RSUs and the privacy concerns of the vehicles, we introduce a relaxed-privacy model wherein the vehicles share partial location information in order to avail themselves of the location-based services. To implement this notion of relaxed privacy, we propose a low-latency protocol for spatial-provenance recovery, wherein vehicles use correlated linear Bloom filters to embed their position information. Our proposed spatial-provenance recovery process takes into account the resolution of localization, the underlying ad hoc protocol, and the coverage range of the wireless technology used by the vehicles. We present a rigorous theoretical analysis of the underlying trade-off between relaxed privacy and the communication overhead of the protocol. Finally, using a wireless testbed, we show that our proposed method requires only a few bits in the packet header to provide security features such as localizing a low-power jammer executing a denial-of-service attack.
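A plain Bloom filter illustrates how a vehicle could embed a coarse grid cell, rather than exact GPS coordinates, in a few header bits. This single-filter sketch and its grid-cell encoding are illustrative assumptions only; the paper's protocol uses correlated linear Bloom filters, which this sketch does not reproduce.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter: m-bit array, k hash probes per item."""

    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _idx(self, item):
        # k deterministic hash positions derived from SHA-256 of "i:item"
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for j in self._idx(item):
            self.bits |= 1 << j

    def query(self, item):
        # True if all k positions are set (may rarely false-positive)
        return all(self.bits >> j & 1 for j in self._idx(item))

# A vehicle quantizes its position to a coarse cell id and shares only that.
lat, lon = 48.137, 11.575                       # illustrative coordinates
cell = (int(lat * 100) // 10, int(lon * 100) // 10)
bf = Bloom()
bf.add(cell)
# An RSU can test candidate cells against the 64-bit filter without ever
# learning the exact coordinates, trading resolution for privacy.
```

The filter size `m` directly controls the privacy/overhead trade-off the paper analyses: fewer bits mean more false positives, i.e., coarser provenance.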
In [4] we describe a variation of the classical permutation decoding algorithm that can be applied to any binary affine-invariant code; in particular, it can be applied successfully to first-order Reed-Muller codes. In this paper we study how to implement it for the family of first-order Generalized Reed-Muller codes. We then give examples showing that it corrects more errors than the known results for this family of codes. Finally, we address, from a probabilistic point of view, the problem of determining when the algorithm only needs to use a smaller PD-like set.
In this paper, we study partitions of finite modules induced by rank support and rank weight. First, we show that partitions induced by rank support are mutually dual with respect to suitable non-degenerate pairings, and hence are reflexive; moreover, we compute the associated generalized Krawtchouk matrices. Similar results are established for partitions induced by the isomorphism relation of rank support. These results generalize counterpart results established for row space partitions and rank partitions of matrix spaces over finite fields. Next, we show that partitions of free modules over a finite chain ring $R$ induced by rank weight are non-reflexive provided that $R$ is not a field; moreover, we characterize the dual partitions explicitly. As a corollary, we show that rank partitions of matrix spaces over $R$ are reflexive if and only if $R$ is a field; moreover, two matrices belong to the same member of the dual partition if and only if their transposes are equivalent. In particular, we show that, in contrast to the case of matrices over finite fields, the rank metric does not induce an association scheme when $R$ is not a field, which further settles an open question proposed by Blanco-Chac\'{o}n, Boix, Greferath and Hieta-Aho in \cite{2}.
With the emergence of diverse and massive data in the upcoming sixth-generation (6G) networks, task-agnostic semantic communication systems are expected to provide robust intelligent services. In this paper, we propose a task-agnostic learnable weighted-knowledge base semantic communication (TALSC) framework for robust image transmission that addresses real-world heterogeneous data bias in the knowledge base (KB), including label-flipping noise and class imbalance. The TALSC framework incorporates a sample confidence module (SCM) as the meta-learner and the semantic coding networks as learners. The learners are updated based on the empirical knowledge provided by the learnable weighted KB (LW-KB). Meanwhile, the meta-learner evaluates the significance of samples according to the task loss feedback and adjusts the update strategy of the learners to enhance the robustness of semantic recovery for unknown tasks. To strike a balance between the number of SCM parameters and the precision of significance evaluation, we design an SCM-grid extension (SCM-GE) approach that embeds Kolmogorov-Arnold networks (KANs) within the SCM, leveraging the concept of spline refinement in KANs to enable a scalable SCM with customizable granularity and without retraining. Simulations demonstrate that the TALSC framework effectively mitigates the effects of flipping noise and class imbalance in task-agnostic image semantic communication, achieving at least 12% higher semantic recovery accuracy (SRA) and multi-scale structural similarity (MS-SSIM) compared to state-of-the-art methods.
We study Non-Homogeneous Sequential Hypothesis Testing (NHSHT), in which a single active Decision-Maker (DM) selects actions with heterogeneous positive costs to identify the true hypothesis under an average error constraint \(\delta\), while minimizing the expected total cost. Using standard arguments, we show that the objective decomposes into the product of the mean number of samples and the mean per-action cost induced by the policy. This leads to a key design principle: one should optimize the ratio of expectations (expected information gain per unit of expected cost) rather than the expectation of the per-step information-per-cost ratio ("bit-per-buck"), which can be suboptimal. We adapt the Chernoff scheme to NHSHT, preserving its classical \(\log(1/\delta)\) scaling. In simulations, the adapted scheme reduces the mean cost by up to 50\% relative to the classic Chernoff policy and by up to 90\% relative to the naive bit-per-buck heuristic.
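The gap between the two criteria can be seen in a tiny made-up example. Below, two hypothetical action policies are compared: B wins under the per-step E[gain/cost] ("bit-per-buck") criterion, yet A halves the expected total cost, in line with the ratio-of-expectations principle. The policies and numbers are illustrative assumptions, not the paper's Chernoff-type scheme.

```python
import random

def simulate_cost(policy, target_gain, trials=2000, seed=0):
    """Mean total cost to accumulate `target_gain` units of information."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        gain = cost = 0.0
        while gain < target_gain:
            g, c = policy(rng)   # one action: (information gain, cost)
            gain += g
            cost += c
        total += cost
    return total / trials

# Policy A: deterministic, 2 units of gain per unit cost.
A = lambda rng: (2.0, 1.0)
# Policy B: coin flip between a cheap-informative and a costly-uninformative action.
B = lambda rng: (9.0, 1.0) if rng.random() < 0.5 else (1.0, 9.0)

# Bit-per-buck E[g/c]: A -> 2.0, B -> (9/1 + 1/9)/2 ~ 4.56, so the naive
# heuristic prefers B. Ratio of expectations E[g]/E[c]: A -> 2.0, B -> 1.0,
# so A should roughly halve the expected total cost.
cost_A = simulate_cost(A, 100.0)
cost_B = simulate_cost(B, 100.0)
```

By Wald-type reasoning, the expected total cost scales like the target gain divided by E[gain]/E[cost], which is why the ratio of expectations is the right per-policy figure of merit.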
Motivated by the constant modulus property of frequency shift keying (FSK) based waveforms and the stabilisation of their radar performance as the number of subpulses increases, this paper proposes an FSK-based joint communications and radar waveform design with a dynamic number of subpulses. From a communications point of view, the system operates based on traditional FSK modulation. From a sensing point of view, although the subpulses are continuously generated and transmitted, radar waveforms are dynamically formed by monitoring the flatness of the spectrum, which in turn guarantees the accuracy of the delay estimation. Further constraints on the waveform length ensure satisfactory values of the root mean square time duration and the ambiguity function sidelobe levels, and prevent overly long waveforms. To estimate the probability of generating extremely long waveforms, the distribution of the number of subpulses is approximated using a Brownian motion process and an existing result on its one-sided exit density. Numerical examples are provided to evaluate the accuracy of the approximate distribution, as well as the ambiguity function sidelobe levels and the delay and Doppler shift estimation performance of the transmitted waveforms.
Estimation of an optical beam's transverse displacement is a canonical imaging problem fundamental to numerous optical imaging and sensing tasks. Quantum enhancements to the measurement precision in this problem have been studied extensively. However, previous studies have neither accounted for diffraction loss in full generality, nor have they addressed how to jointly optimize the spatial mode and the balance between squeezing and coherent amplitude. Here we show that, in the small-displacement limit, the seemingly intractable infinite-spatial-mode problem can be reduced to a compact three-mode interaction framework. We quantify the improvement afforded by an optimized single-spatial-mode Gaussian-state probe over the optimal classical laser probe, and show that a two-spatial-mode homodyne receiver is asymptotically optimal for the former in the limit of high probe energy. Our findings reveal a strategy for identifying quantum-optimal probes in the presence of generic multimode linear probe-target interaction and photon loss.
This paper develops a general approach to characterize the long-time trajectory behavior of nonconvex gradient descent in generalized single-index models in the large aspect ratio regime. In this regime, we show that for each iteration the gradient descent iterate concentrates around a deterministic vector called the `Gaussian theoretical gradient descent', whose dynamics can be tracked by a state evolution system of two recursive equations for two scalars. Our concentration guarantees hold universally for a broad class of design matrices and remain valid over long time horizons until algorithmic convergence or divergence occurs. Moreover, our approach reveals that gradient descent iterates are in general approximately independent of the data and strongly incoherent with the feature vectors, a phenomenon previously known as the `implicit regularization' effect of gradient descent in specific models under Gaussian data. As an illustration of the utility of our general theory, we present two applications of different natures in the regression setting. In the first, we prove global convergence of nonconvex gradient descent with general independent initialization for a broad class of structured link functions, and establish universality of randomly initialized gradient descent in phase retrieval for large aspect ratios. In the second, we develop a data-free iterative algorithm for estimating state evolution parameters along the entire gradient descent trajectory, thereby providing a low-cost yet statistically valid tool for practical tasks such as hyperparameter tuning and runtime determination. As a by-product of our analysis, we show that in the large aspect ratio regime, the Gaussian theoretical gradient descent coincides with a recent line of dynamical mean-field theory for gradient descent over the constant-time horizon.
In molecular communications (MC), inter-symbol interference (ISI) and noise are key factors that degrade communication reliability. Although time-domain equalization can effectively mitigate these effects, its computational complexity grows rapidly with the channel memory. In contrast, frequency-domain equalization (FDE) offers greater computational efficiency but typically requires prior knowledge of the channel model. To address this limitation, this letter proposes FDE techniques based on long short-term memory (LSTM) neural networks, which model temporal correlation in MC channels to improve ISI and noise suppression. To eliminate the reliance on prior channel information in conventional FDE methods, a supervised training strategy is employed for channel-adaptive equalization. Simulation results demonstrate that the proposed LSTM-FDE significantly reduces the bit error rate compared to traditional FDE and feedforward neural network-based equalizers. This performance gain is attributed to the LSTM's temporal modeling capabilities, which enhance noise suppression and accelerate model convergence, while maintaining comparable computational efficiency.
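To make the FDE baseline concrete, the sketch below applies classical one-tap zero-forcing FDE to a toy ISI channel: transform the received block, divide each frequency bin by the channel response, transform back. The two-tap channel and the naive DFT are illustrative assumptions; the letter's LSTM-FDE learns the equalization instead of assuming a known, noise-free channel response as done here.

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT (O(n^2)), enough for a short illustrative block."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def circular_channel(x, h):
    """Circular convolution of block x with channel taps h (models ISI)."""
    n = len(x)
    return [sum(h[m] * x[(j - m) % n] for m in range(len(h))) for j in range(n)]

# BPSK block through a two-tap ISI channel, then one-tap FDE per subcarrier.
tx = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]
h = [1.0, 0.4]                                  # main tap + one ISI tap
rx = circular_channel(tx, h)
H = dft(h + [0.0] * (len(tx) - len(h)))         # channel frequency response
eq = [R / Hk for R, Hk in zip(dft(rx), H)]      # zero-forcing: divide per bin
est = [v.real for v in dft(eq, inverse=True)]   # back to time domain
```

In the noiseless case this recovers the transmitted block exactly; with noise, the per-bin division amplifies noise in weak bins, which is the weakness the learned equalizer targets.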