Browse, search and filter the latest cybersecurity research papers from arXiv
Holographic multiple-input multiple-output (MIMO) is envisioned as one of the most promising technology enablers for future sixth-generation (6G) networks. The use of electrically large holographic surface (HoloS) antennas has the potential to significantly boost the spatial multiplexing gain by increasing the number of degrees of freedom (DoF), even in line-of-sight (LoS) channels. In this context, the research community has shown a growing interest in characterizing the fundamental limits of this technology. In this paper, we compare the two analytical methods commonly utilized in the literature for this purpose: the cut-set integral and the self-adjoint operator. We provide a detailed description of both methods and discuss their advantages and limitations.
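For a sense of the scaling at stake, a classical paraxial approximation from the communication-modes literature (stated here for context; it is not taken from this abstract) gives, for two parallel square apertures of areas $A_S$ and $A_R$ separated by a LoS distance $d$,
$$\mathrm{DoF} \approx \frac{A_S A_R}{(\lambda d)^2},$$
where $\lambda$ is the wavelength. Scaling laws of this form are exactly the kind of fundamental limit that both the cut-set integral and the self-adjoint operator methods aim to characterize.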
Quantum channel discrimination has been studied from an information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of unknown channel accesses. In this paper, we study the query complexity of quantum channel discrimination, wherein the goal is to determine the minimum number of channel uses needed to reach a desired error probability. To this end, we show that the query complexity of binary channel discrimination depends logarithmically on the inverse error probability and inversely on the negative logarithm of the (geometric and Holevo) channel fidelity. As a special case of these findings, we precisely characterize the query complexity of discriminating between two classical channels. We also provide lower and upper bounds on the query complexity of binary asymmetric channel discrimination and multiple quantum channel discrimination. For the former, the query complexity depends on the geometric R\'enyi and Petz R\'enyi channel divergences, while for the latter, it depends on the negative logarithm of (geometric and Uhlmann) channel fidelity. For multiple channel discrimination, the upper bound scales as the logarithm of the number of channels.
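Schematically, and suppressing the exact constants and error conventions used in the paper, the stated symmetric binary result reads
$$n^*(\varepsilon) = \Theta\!\left(\frac{\log(1/\varepsilon)}{-\log F(\mathcal{N}_0, \mathcal{N}_1)}\right),$$
where $n^*(\varepsilon)$ is the minimum number of uses of the unknown channel needed to reach error probability $\varepsilon$, and $F$ denotes the (geometric or Holevo) channel fidelity between the two hypotheses $\mathcal{N}_0$ and $\mathcal{N}_1$.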
Movable antennas represent an emerging field in telecommunication research and a potential approach to achieving higher data rates in multiple-input multiple-output (MIMO) communications when the total number of antennas is limited. Most solutions and analyses to date have been limited to \emph{narrowband} setups. This work complements the prior studies by quantifying the benefit of using movable antennas in \emph{wideband} MIMO communication systems. First, we derive a novel uplink wideband system model that also accounts for distortion from transceiver hardware impairments. We then formulate and solve an optimization task to maximize the average sum rate by adjusting the antenna positions using particle swarm optimization. Finally, the performance with movable antennas is compared with fixed uniform arrays and the derived theoretical upper bound. The numerical study concludes that the data rate improvement from movable antennas over other arrays heavily depends on the level of hardware impairments, the richness of the multi-path environments, and the number of subcarriers. The present study provides vital insights into the most suitable use cases for movable antennas in future wideband systems.
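To illustrate the optimization step, the following minimal sketch runs particle swarm optimization over the positions of a movable linear receive array for a toy frequency-selective multipath channel. The channel generator, rate expression, and all parameters are simplified placeholders rather than the paper's wideband hardware-impairment model.

import numpy as np

rng = np.random.default_rng(0)

# Toy frequency-selective channel: fixed scatterer angles, i.i.d. per-subcarrier gains.
L_PATHS, N_SUB, N_ANT, SNR = 8, 16, 4, 10.0
angles = rng.uniform(-np.pi / 2, np.pi / 2, L_PATHS)
gains = (rng.normal(size=(L_PATHS, N_SUB))
         + 1j * rng.normal(size=(L_PATHS, N_SUB))) / np.sqrt(2 * L_PATHS)

def avg_rate(pos):
    """Average rate over subcarriers for antenna positions pos (in wavelengths)."""
    steer = np.exp(2j * np.pi * np.outer(pos, np.sin(angles)))   # N_ANT x L_PATHS
    h = steer @ gains                                            # N_ANT x N_SUB channel
    return np.mean(np.log2(1 + SNR * np.sum(np.abs(h) ** 2, axis=0)))

def pso(n_particles=30, iters=200, aperture=8.0):
    """Maximize avg_rate over antenna positions confined to [0, aperture] wavelengths."""
    x = rng.uniform(0, aperture, (n_particles, N_ANT))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([avg_rate(p) for p in x])
    g_best = p_best[p_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, N_ANT))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)  # inertia + pulls
        x = np.clip(x + v, 0.0, aperture)
        val = np.array([avg_rate(p) for p in x])
        better = val > p_val
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmax()].copy()
    return g_best, p_val.max()

pos, rate = pso()
print(f"optimized positions (wavelengths): {np.round(pos, 2)}, avg rate {rate:.2f} bit/s/Hz")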
In this paper, we study linear codes over $\mathbb{Z}_k$ based on lattices and theta functions. We obtain the MacWilliams identities for the complete weight enumerators and for the symmetrized weight enumerators based on the theory of theta functions. We extend the main work of Bannai, Dougherty, Harada and Oura to the finite ring $\mathbb{Z}_k$ for any positive integer $k$ and present the MacWilliams identity for complete weight enumerators in genus $g$. When $k=p$ is a prime number, we establish the relationship between the theta functions of the associated lattices over a cyclotomic field and the complete weight enumerators with Hamming weight of the codes, which is an analogue of the result by G. van der Geer and F. Hirzebruch, who showed the corresponding identity with Lee weight enumerators.
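For reference, the genus-one identity being generalized is standard for a linear code $C \subseteq \mathbb{Z}_k^n$ with dual $C^{\perp}$ (the paper's contribution includes, among other things, its genus-$g$ extension):
$$\mathrm{cwe}_{C^{\perp}}(x_0, \ldots, x_{k-1}) = \frac{1}{|C|}\, \mathrm{cwe}_{C}\!\left(\sum_{b \in \mathbb{Z}_k} \zeta^{0 \cdot b} x_b, \; \ldots, \; \sum_{b \in \mathbb{Z}_k} \zeta^{(k-1)b} x_b\right), \qquad \zeta = e^{2\pi i/k},$$
i.e., each variable $x_a$ is replaced by the character sum $\sum_{b \in \mathbb{Z}_k} \zeta^{ab} x_b$.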
Constraint-based causal discovery algorithms utilize many statistical tests for conditional independence to uncover networks of causal dependencies. These approaches to causal discovery rely on an assumed correspondence between the graphical properties of a causal structure and the conditional independence properties of observed variables, known as the causal Markov condition and faithfulness. Finite data yields an empirical distribution that is "close" to the actual distribution. Across these many possible empirical distributions, the correspondence to the graphical properties can break down for different conditional independencies, and multiple violations can occur at the same time. We study this "meta-dependence" between conditional independence properties using the following geometric intuition: each conditional independence property constrains the space of possible joint distributions to a manifold. The "meta-dependence" between conditional independences is informed by the position of these manifolds relative to the true probability distribution. We provide a simple-to-compute measure of this meta-dependence using information projections and consolidate our findings empirically using both synthetic and real-world data.
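Concretely, for the constraint $X \perp Y \mid Z$, the information projection of a joint distribution $p$ onto the corresponding CI manifold is $p(z)p(x|z)p(y|z)$, and the resulting KL divergence equals the conditional mutual information $I(X;Y\mid Z)$. A minimal sketch of this computation (array shapes and names are illustrative, not the paper's code):

import numpy as np

def cond_mutual_info(p):
    """I(X;Y|Z) in nats for a joint pmf p with axes (x, y, z).

    Equals the KL divergence from p to its information projection
    onto the manifold of distributions satisfying X independent of Y given Z.
    """
    pz = p.sum(axis=(0, 1), keepdims=True)    # p(z)
    pxz = p.sum(axis=1, keepdims=True)        # p(x, z)
    pyz = p.sum(axis=0, keepdims=True)        # p(y, z)
    proj = pxz * pyz / pz                     # p(z) p(x|z) p(y|z)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / proj[mask])).sum())

# Example: X and Y depend only through Z, so the divergence to the manifold is ~0.
rng = np.random.default_rng(1)
pz = np.array([0.5, 0.5])
px_z = rng.dirichlet(np.ones(3), size=2)      # p(x|z), rows indexed by z
py_z = rng.dirichlet(np.ones(3), size=2)      # p(y|z), rows indexed by z
p = np.einsum("z,zx,zy->xyz", pz, px_z, py_z)
print(cond_mutual_info(p))                    # ~0.0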
The storage capacity of a graph measures the maximum amount of information that can be stored across its vertices, such that the information at any vertex can be recovered from the information stored at its neighborhood. The study of this graph quantity is motivated by applications in distributed storage and by its intimate relations to the index coding problem from the area of network information theory. In the latter, one wishes to minimize the amount of information that has to be transmitted to a collection of receivers, in a way that enables each of them to discover its required data using some prior side information. In this paper, we initiate the study of the Storage Capacity and Index Coding problems from the perspective of parameterized complexity. We prove that the Storage Capacity problem parameterized by the solution size admits a kernelization algorithm producing kernels of linear size. We also provide such a result for the Index Coding problem, in the linear and non-linear settings, where it is parameterized by the dual value of the solution, i.e., the length of the transmission that can be saved using the side information. A key ingredient in the proofs is the crown decomposition technique due to Chor, Fellows, and Juedes (WG 2003, WG 2004). As an application, we significantly extend an algorithmic result of Dau, Skachek, and Chee (IEEE Trans. Inform. Theory, 2014).
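As a small worked example of the storage capacity (illustrative; not taken from this abstract): on the triangle $K_3$ with one bit per vertex, store independent uniform bits $x_1, x_2$ and set $x_3 = x_1 \oplus x_2$. Every vertex is then the XOR of its two neighbors, so $H(X_1, X_2, X_3) = 2$ bits are stored recoverably; conversely, since each vertex must be a function of the other two, $H(X_1, X_2, X_3) = H(X_2, X_3) \le 2$, so the binary storage capacity of $K_3$ is exactly $2$.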
ReLU is a widely used activation function in deep neural networks. This paper explores the stability properties of the ReLU map. For any weight matrix $\boldsymbol{A} \in \mathbb{R}^{m \times n}$ and bias vector $\boldsymbol{b} \in \mathbb{R}^{m}$ at a given layer, we define the condition number $\beta_{\boldsymbol{A},\boldsymbol{b}}$ as $\beta_{\boldsymbol{A},\boldsymbol{b}} = \frac{\mathcal{U}_{\boldsymbol{A},\boldsymbol{b}}}{\mathcal{L}_{\boldsymbol{A},\boldsymbol{b}}}$, where $\mathcal{U}_{\boldsymbol{A},\boldsymbol{b}}$ and $\mathcal{L}_{\boldsymbol{A},\boldsymbol{b}}$ are the upper and lower Lipschitz constants, respectively. We first demonstrate that for any given $\boldsymbol{A}$ and $\boldsymbol{b}$, the condition number satisfies $\beta_{\boldsymbol{A},\boldsymbol{b}} \geq \sqrt{2}$. Moreover, when the weights of the network at a given layer are initialized as random i.i.d. Gaussian variables and the bias term is set to zero, the condition number asymptotically approaches this lower bound. This theoretical finding suggests that Gaussian weight initialization is optimal for preserving distances in the context of random deep neural network weights.
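These quantities can be probed numerically. Below is a crude Monte Carlo sketch under illustrative dimensions (not the paper's experiments): sampled pairs only lower-bound $\mathcal{U}_{\boldsymbol{A},\boldsymbol{b}}$ and upper-bound $\mathcal{L}_{\boldsymbol{A},\boldsymbol{b}}$, so the printed ratio is a lower estimate of $\beta_{\boldsymbol{A},\boldsymbol{b}}$, whose true value is at least $\sqrt{2}$ by the paper's result.

import numpy as np

rng = np.random.default_rng(0)
m, n, pairs = 1000, 100, 5000

A = rng.normal(size=(m, n)) / np.sqrt(m)      # i.i.d. Gaussian weights, zero bias
relu = lambda t: np.maximum(t, 0.0)

# Sample input pairs (x, y) and record the expansion ratio of the ReLU map.
x = rng.normal(size=(n, pairs))
y = rng.normal(size=(n, pairs))
num = np.linalg.norm(relu(A @ x) - relu(A @ y), axis=0)
den = np.linalg.norm(x - y, axis=0)
ratios = num / den

# max/min over sampled pairs is a valid lower bound on the condition number.
print(f"sampled ratio range: [{ratios.min():.3f}, {ratios.max():.3f}], "
      f"beta lower estimate: {ratios.max() / ratios.min():.3f}")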
Federated Learning (FL) has emerged as a promising framework for distributed learning, but its growing complexity has led to significant energy consumption, particularly from computations on the client side. This challenge is especially critical in energy-harvesting FL (EHFL) systems, where device availability fluctuates due to limited and time-varying energy resources. We propose FedBacys, a battery-aware FL framework that introduces cyclic client participation based on users' battery levels to cope with these issues. FedBacys enables clients to save energy and strategically perform local training just before their designated transmission time by clustering clients and scheduling their involvement sequentially. This design minimizes redundant computation, reduces system-wide energy usage, and improves learning stability. Our experiments demonstrate that FedBacys outperforms existing approaches in terms of energy efficiency and performance consistency, exhibiting robustness even under non-i.i.d. training data distributions and with very infrequent battery charging. This work presents the first comprehensive evaluation of cyclic client participation in EHFL, incorporating both communication and computation costs into a unified, resource-aware scheduling strategy.
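A minimal sketch of the cyclic, battery-aware scheduling idea follows; cluster sizes, thresholds, and energy dynamics are illustrative placeholders, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

def make_schedule(battery, n_clusters):
    """Group clients into clusters by battery level for cyclic participation.

    Clients with similar battery levels train and transmit in the same round,
    so each client can defer local training until just before its slot.
    """
    order = np.argsort(battery)[::-1]             # fullest batteries first
    return np.array_split(order, n_clusters)

battery = rng.uniform(0.0, 1.0, size=12)          # current battery levels
clusters = make_schedule(battery, n_clusters=4)

for rnd in range(8):                              # cyclic cluster participation
    active = clusters[rnd % len(clusters)]
    # Only clients with enough residual energy actually participate this round.
    ready = [int(c) for c in active if battery[c] > 0.2]
    print(f"round {rnd}: scheduled {active.tolist()}, participating {ready}")
    battery[active] -= 0.05                       # energy spent on training/upload
    battery = np.clip(battery + 0.02, 0.0, 1.0)   # harvesting between rounds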
The task of constructing infinite families of self-dual codes with unbounded lengths and minimum distances exhibiting square-root lower bounds is extremely challenging, especially when it comes to cyclic codes. Recently, the first infinite families of Euclidean self-dual binary and nonbinary cyclic codes of unbounded length, whose minimum distances have a square-root lower bound or a lower bound better than square root, were constructed in \cite{Chen23}. Let $q$ be a power of a prime number and $Q=q^2$. In this paper, we first improve the lower bounds of \cite{Fan23,GDL21,Wang24} on the minimum distances of Euclidean and Hermitian duals of BCH codes with length $\frac{q^m-1}{q^s-1}$ over $\mathbb{F}_q$ and $\frac{Q^m-1}{Q-1}$ over $\mathbb{F}_Q$, respectively, for designed distances in some ranges, where $\frac{m}{s}\geq 3$. Then, based on the matrix-product construction and lower bounds on the minimum distances of BCH codes and their duals, we obtain several classes of Euclidean and Hermitian self-dual codes whose minimum distances have square-root or square-root-like lower bounds. Our lower bounds on the minimum distances of Euclidean and Hermitian self-dual cyclic codes improve many results in \cite{Chen23}. In addition, our lower bounds on the minimum distances of the duals of BCH codes are almost $q^s-1$ or $q$ times the existing lower bounds.
In this paper, two decoding algorithms based on Successive Cancellation (SC) are proposed to improve the error-correction performance of cyclic redundancy check (CRC)-aided polar codes while aiming for a low-complexity implementation. Comparisons with Dynamic SC Flip (DSCF) and SC Perturbation (SCP) are carried out since the proposed DSCF and Perturbation (DSCFP) and Perturbed DSCF (PDSCF) algorithms combine both methods. The analysis includes comparisons across several code lengths $N$ and various numbers of decoding attempts $T_{max}$. For $N=1024$ and coding rate $R=\frac{1}{2}$, the DSCF and SCP algorithms with $T_{max}=17$ are bested by approximately $0.1$\,dB at a block error rate (BLER) of $10^{-3}$. At $\text{BLER}=10^{-6}$ and for $T_{max}=64$, the gains are $0.375$\,dB and $>0.5$\,dB with respect to DSCF and SCP, respectively. At high signal-to-noise ratio, the average computational complexity of the proposed algorithms is virtually equivalent to that of SC.
Polar codes are a class of linear error-correction codes that have received a lot of attention due to their ability to achieve channel capacity on any binary discrete memoryless channel (B-DMC) with low-complexity successive-cancellation (SC) decoding. However, practical implementations often require better error-correction performance than SC decoding provides, particularly at short to moderate code lengths. The successive-cancellation flip (SCF) decoding algorithm was proposed to improve error-correction performance by detecting and correcting the first wrongly estimated bit in a codeword before resuming SC decoding. At each additional SC decoding trial, i.e., each decoding attempt beyond the initial unsuccessful one, the bit estimated as the least reliable is flipped. Dynamic SCF (DSCF) is a variation of SCF in which multiple bits may be flipped simultaneously per trial. Despite their improved error-correction performance compared to the SC decoder, SCF-based decoders have variable execution time, which leads to high average execution time and latency. In this work, we propose the generalized restart mechanism (GRM), which allows the decoder to skip computations that are identical between the initial trial and any additional trial. Under DSCF decoding with up to 3-bit flips per decoding trial, the proposed GRM is shown to reduce the average execution time by 25% to 60% without any negative effect on error-correction performance. The proposed mechanism is adaptable to state-of-the-art latency-reduction techniques: when applied to Fast-DSCF-3 decoding, the additional reduction brought by the GRM is 15% to 22%. For the DSCF-3 decoder, the proposed mechanism requires approximately 4% additional memory.
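To make the flip mechanics concrete, here is a toy SCF skeleton (a minimal sketch: min-sum LLR updates, a stand-in for the CRC check, and no restart optimization). In GRM terms, every computation preceding the flipped leaf in an extra trial is identical to the initial pass; that is precisely what the proposed mechanism avoids recomputing, whereas this toy simply re-runs the recursion.

import numpy as np

def f_llr(a, b):
    """Check-node update (min-sum approximation)."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_llr(a, b, u):
    """Variable-node update given the left partial sums u."""
    return b + (1 - 2 * u) * a

def sc_decode(llr, frozen, flips, out):
    """Recursive SC pass; out collects (bit, |decision LLR|) per leaf.

    Unfrozen leaves listed in flips have their hard decision inverted.
    Returns the re-encoded codeword segment for the parent node.
    """
    n = len(llr)
    if n == 1:
        i = len(out)                                  # leaf index in u-domain order
        u = 0 if (frozen[i] or llr[0] >= 0) else 1
        if not frozen[i] and i in flips:
            u ^= 1
        out.append((u, abs(llr[0])))
        return np.array([u])
    half = n // 2
    left = sc_decode(f_llr(llr[:half], llr[half:]), frozen, flips, out)
    right = sc_decode(g_llr(llr[:half], llr[half:], left), frozen, flips, out)
    return np.concatenate([(left + right) % 2, right])

def scf_decode(llr, frozen, crc_ok, t_max):
    """Initial SC pass, then one extra trial per least-reliable-bit flip."""
    out = []
    sc_decode(llr, frozen, set(), out)
    bits = [u for u, _ in out]
    if crc_ok(bits):
        return bits, 0
    cand = sorted((mag, i) for i, (_, mag) in enumerate(out) if not frozen[i])
    for t, (_, i) in enumerate(cand[:t_max], start=1):
        out = []
        # GRM observation: all computations for leaves < i repeat the initial
        # pass and could be restored from a cached state instead of re-run.
        sc_decode(llr, frozen, {i}, out)
        bits = [u for u, _ in out]
        if crc_ok(bits):
            return bits, t
    return bits, t_max

# Toy demo: N = 8, rate 1/2, all-zero codeword; the "CRC" is a ground-truth check.
frozen = np.array([True, True, True, False, True, False, False, False])
llr = 2.0 + 0.9 * np.random.default_rng(3).normal(size=8)
bits, trials = scf_decode(llr, frozen, lambda b: not any(b), t_max=4)
print(bits, "after", trials, "extra trial(s)")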
Recent developments in Integrated Sensing and Communication have led to new adversarial models in wireless security through Integrated Sensing and Jamming (ISAJ) adversaries. ISAJ adversaries, owing to their sensing capabilities, are known to inject jamming energy over the victim's frequency band and to use generalized energy measurements on various network frequencies to detect the presence of countermeasures. Existing countermeasures against such ISAJ adversaries rest on the assumption that the adversary has no knowledge of the countermeasure. However, according to Kerckhoffs' principle in cryptography, the security of a countermeasure should rely only on the secret keys, not on the obfuscation of the countermeasure itself. On testing the security of existing countermeasures, we observe that they violate Kerckhoffs' principle, thus motivating the need for new countermeasures. In this regard, we propose a novel network-centric countermeasure against ISAJ adversaries, wherein a group of users in the network assist the victim to reliably communicate her messages in a covert manner. Firstly, we analyse the error performance of the proposed countermeasure and study its behaviour as a function of the number of assisting users in the network. Subsequently, to validate its security in light of Kerckhoffs' principle, we study the Shannon entropy associated with the presence of the victim's messages in the network and analyse its behaviour as a function of the number of assisting users. Finally, to study the interplay between reliability and covertness, we pose interesting optimization problems and solve them to choose the underlying parameters of the countermeasure and the number of assisting users.
Compositional graphoids are fundamental discrete structures which appear in probabilistic reasoning, particularly in the area of graphical models. They are semigraphoids which satisfy the Intersection and Composition properties. These important properties, however, are not enjoyed by general probability distributions. We survey what is known in terms of sufficient conditions for Intersection and Composition and derive a set of new sufficient conditions in the context of discrete random variables based on conditional information inequalities for Shannon entropies.
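For reference, the two properties read as follows in standard ternary-relation notation, for pairwise disjoint sets of variables $X, Y, W, Z$ (juxtaposition denotes union):
$$\text{Intersection:}\quad X \perp Y \mid ZW \;\wedge\; X \perp W \mid ZY \;\Longrightarrow\; X \perp YW \mid Z,$$
$$\text{Composition:}\quad X \perp Y \mid Z \;\wedge\; X \perp W \mid Z \;\Longrightarrow\; X \perp YW \mid Z.$$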
The paper presents a comprehensive study of group codes from non-abelian split metacyclic group algebras. We derive an explicit Wedderburn-like decomposition of finite split metacyclic group algebras over fields with characteristic coprime to the group order. Utilizing this decomposition, we develop a systematic theory of metacyclic codes, providing their algebraic description and proving that they can be viewed as generalized concatenated codes with cyclic inner codes and skew quasi-cyclic outer codes. We establish bounds on the minimum distance of metacyclic codes and investigate the class of induced codes. Furthermore, we show the feasibility of constructing a partial key-recovery attack against certain McEliece-type cryptosystems based on metacyclic codes by exploiting their generalized concatenated structure.
Distributed Arithmetic Coding (DAC) has emerged as a feasible solution to the Slepian-Wolf problem, particularly in scenarios with non-stationary sources and for data sequences with lengths ranging from small to medium. Due to the inherent decoding ambiguity in DAC, the number of candidate paths grows exponentially with the source length. To select the correct decoding path from the set of candidates, DAC decoders use the Maximum A Posteriori (MAP) metric to rank the decoding sequences and output the path with the highest MAP metric as the decoding result. However, this method may still output an incorrect path whose MAP metric exceeds that of the correct one. To address this issue, we propose Distributed Arithmetic Coding Aided by Linear Codes (DALC), which employs linear codes to constrain the decoding process, thereby eliminating some incorrect paths while preserving the correct one. During the encoding phase, DALC generates the parity bits of the linear code for the source data. In the decoding phase, each path in the candidate set is verified in descending order of the MAP metric until a path that meets the verification criteria is encountered, which is then output as the decoding result. DALC enhances the decoding performance of DAC by excluding candidate paths that do not satisfy the constraints imposed by the linear code. Our experimental results demonstrate that DALC reduces the Bit Error Rate (BER), with especially notable improvements in skewed-source scenarios.
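A minimal sketch of the decoding-phase verification (the parity-check matrix, candidate generation, and MAP metrics below are illustrative placeholders, not the paper's implementation):

import numpy as np

def dalc_select(candidates, metrics, H, parity):
    """Return the best-metric candidate consistent with the linear code.

    candidates: list of 0/1 numpy arrays (candidate source sequences)
    metrics:    MAP metric per candidate (higher is better)
    H:          parity-generating matrix of the linear code over GF(2)
    parity:     parity bits produced at the encoder for the true source
    """
    for i in np.argsort(metrics)[::-1]:           # descending MAP order
        if np.array_equal(H @ candidates[i] % 2, parity):
            return candidates[i]                  # first verified path wins
    return None                                   # all candidates rejected

# Toy usage with a random 8-bit source and 3 parity bits.
rng = np.random.default_rng(0)
H = rng.integers(0, 2, size=(3, 8))
src = rng.integers(0, 2, size=8)
parity = H @ src % 2
cands = [src ^ rng.integers(0, 2, size=8) for _ in range(7)] + [src.copy()]
metrics = rng.uniform(size=8)                     # correct path need not rank first
print(dalc_select(cands, metrics, H, parity))     # highest-metric verified candidate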
With the growing density of wireless networks and demand for multi-hop transmissions, precise delay Quality of Service (QoS) analysis has become a critical challenge. This paper introduces a multi-hop delay QoS analysis framework based on the sliding block martingale, addressing the loose bounds of prior methods that rely on service-process martingales and min-plus transformations. By constructing a sliding block martingale with a window, we capture both long-term trends and short-term fluctuations in the backlog, eliminating the reliance on the generalized incremental property. The framework redefines delay unreliability events using cascading attributes, deriving a more compact Delay Unreliability Probability Boundary (DUPB). To improve the efficiency of solving the key parameter $\theta$, we propose a Micrometric Intervals based Supermartingale Upcrossing Estimate Theorem, quantifying the upper bound of event occurrence frequency to constrain the solution space of $\theta$. Simulations based on the 3GPP UMa/UMi channel model validate the framework's effectiveness. Results show that in 2-7 hop scenarios, the maximum deviation between theoretical boundaries and Monte Carlo simulations is $4.116 \times 10^{-5}$, with a lower RMSE than existing methods. Iteration count and CPU time for solving $\theta$ are reduced by $59\%-72\%$ and $60.6\%-70.5\%$, respectively, improving analysis efficiency. Furthermore, the derived minimum service rate for multi-hop queues offers a valuable reference for resource allocation. The framework demonstrates high accuracy, scalability, and practicality in complex multi-hop networks.
Integrated heterogeneous service provisioning (IHSP) is a promising paradigm that is designed to concurrently support a variety of heterogeneous services, extending beyond sensing and communication to meet the diverse needs of emerging applications. However, a primary challenge of IHSP is addressing the conflicts between multiple competing service demands under constrained resources. In this paper, we overcome this challenge by the joint use of two novel elastic design strategies: compromised service value assessment and flexible multi-dimensional resource multiplexing. Consequently, we propose a value-prioritized elastic multi-dimensional multiple access (MDMA) mechanism for IHSP systems. First, we modify the Value-of-Service (VoS) metric by incorporating elastic parameters to characterize user-specific tolerance and compromise in response to various performance degradations under constrained resources. This VoS metric serves as the foundation for prioritizing services and enabling effective and fair service scheduling among concurrent competing demands. Next, we adapt the MDMA to elastically multiplex services using appropriate multiple access schemes across different resource domains. This protocol leverages user-specific interference tolerances and cancellation capabilities across different domains to reduce conflicts between resource demands and co-channel interference within the same domain. Then, we maximize the system's VoS by jointly optimizing the MDMA design and power allocation. Since this problem is non-convex, we propose a monotonic optimization-assisted dynamic programming (MODP) algorithm to obtain its optimal solution. Additionally, we develop the VoS-prioritized successive convex approximation (SCA) algorithm to efficiently find its suboptimal solution. Finally, simulations are presented to validate the effectiveness of the proposed designs.
Quantum network applications impose a variety of requirements on entanglement resources in terms of rate, fidelity, latency, and more. The repeaters in the quantum network must combine good methods for entanglement generation, effective entanglement distillation, and smart routing protocols to satisfy these application requirements. In this work, we focus on quantum error correction-based entanglement distillation in a linear chain of quantum repeaters. While conventional approaches reuse the same distillation scheme over multiple hop lengths after entanglement swaps, we propose a novel adaptive error correction scheme that boosts end-to-end metrics. Specifically, depending on the network operation point, we adapt the code used in distillation over successive rounds to monotonically increase the rate while also improving fidelity. We demonstrate the effectiveness of this strategy using three codes: [[9,1,3]], [[9,2,3]], and [[9,3,3]]. We compare the performance of four different protocols that combine the codes in different ways, and we define a new performance metric, efficiency, that incorporates both overall rate and fidelity. While we highlight our innovation under minimal assumptions on noise, the method can be easily generalized to realistic network settings. By combining our approach with good entanglement generation methods and smart routing protocols, we can achieve application requirements in a systematic, resource-efficient way.