Browse, search and filter the latest cybersecurity research papers from arXiv
By strengthening known results about primitivity-blocking words in free groups, we prove that for any nontrivial element w of a free group of finite rank, there are words that cannot be subwords of any cyclically reduced automorphic image of w. This has implications for the average-case complexity of Whitehead's problem.
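To make the objects in this abstract concrete, here is a minimal sketch of free and cyclic reduction of words in a free group of rank 2, together with the subword test the result is about. The convention of writing an inverse generator as the uppercase letter (so `A` stands for $a^{-1}$) is ours, not the paper's.

```python
def reduce_word(w):
    """Freely reduce a word: repeatedly cancel adjacent inverse pairs like 'aA'."""
    out = []
    for c in w:
        # c.swapcase() is the inverse letter under our uppercase-inverse convention
        if out and out[-1] == c.swapcase() and out[-1] != c:
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def cyclically_reduce(w):
    """Cyclically reduce: also cancel inverse pairs across the two ends of the word."""
    w = reduce_word(w)
    while len(w) >= 2 and w[0] == w[-1].swapcase() and w[0] != w[-1]:
        w = w[1:-1]
    return w

def is_subword(u, w):
    """Whether u occurs as a (contiguous) subword of w."""
    return u in w
```

For example, `cyclically_reduce("AabaB")` first freely reduces to `baB` and then cancels the matching ends, leaving `a`. The theorem says that for any nontrivial $w$ there exist words $u$ with `is_subword(u, v)` false for every cyclically reduced automorphic image $v$ of $w$.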
We study the computational complexity of estimating the quantum $\ell_{\alpha}$ distance ${\mathrm{T}_\alpha}(\rho_0,\rho_1)$, defined via the Schatten $\alpha$-norm $\|A\|_{\alpha} = \mathrm{tr}(|A|^{\alpha})^{1/\alpha}$, given $\operatorname{poly}(n)$-size state-preparation circuits of $n$-qubit quantum states $\rho_0$ and $\rho_1$. This quantity serves as a lower bound on the trace distance for $\alpha > 1$. For any constant $\alpha > 1$, we develop an efficient rank-independent quantum estimator for ${\mathrm{T}_\alpha}(\rho_0,\rho_1)$ with time complexity $\operatorname{poly}(n)$, achieving an exponential speedup over the prior best results of $\exp(n)$ due to Wang, Guan, Liu, Zhang, and Ying (TIT 2024). Our improvement leverages efficiently computable uniform polynomial approximations of signed positive power functions within quantum singular value transformation, thereby eliminating the dependence on the rank of the quantum states. Our quantum algorithm reveals a dichotomy in the computational complexity of the Quantum State Distinguishability Problem with Schatten $\alpha$-norm (QSD$_{\alpha}$), which involves deciding whether ${\mathrm{T}_\alpha}(\rho_0,\rho_1)$ is at least $2/5$ or at most $1/5$. This dichotomy arises between the cases of constant $\alpha > 1$ and $\alpha=1$:

- For any $1+\Omega(1) \leq \alpha \leq O(1)$, QSD$_{\alpha}$ is $\mathsf{BQP}$-complete.
- For any $1 \leq \alpha \leq 1+\frac{1}{n}$, QSD$_{\alpha}$ is $\mathsf{QSZK}$-complete, implying that no efficient quantum estimator for $\mathrm{T}_\alpha(\rho_0,\rho_1)$ exists unless $\mathsf{BQP} = \mathsf{QSZK}$.

The hardness results follow from reductions based on new rank-dependent inequalities for the quantum $\ell_{\alpha}$ distance with $1\leq \alpha \leq \infty$, which are of independent interest.
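The lower-bound relation between the $\ell_\alpha$ distance and the trace distance can be checked numerically. A minimal sketch, assuming the normalization $\mathrm{T}_\alpha(\rho_0,\rho_1) = \frac{1}{2}\|\rho_0-\rho_1\|_\alpha$ (so that $\alpha=1$ recovers the trace distance; the paper's exact normalization may differ):

```python
import numpy as np

def schatten_norm(A, alpha):
    """Schatten alpha-norm of a Hermitian matrix: singular values are |eigenvalues|."""
    s = np.abs(np.linalg.eigvalsh(A))
    return (s ** alpha).sum() ** (1.0 / alpha)

def t_alpha(rho0, rho1, alpha):
    """Assumed normalization: T_alpha = (1/2) ||rho0 - rho1||_alpha."""
    return 0.5 * schatten_norm(rho0 - rho1, alpha)

def random_density(n, rng):
    """Random density matrix via a complex Ginibre matrix G: rho = G G† / tr(G G†)."""
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real
```

Since $\|\cdot\|_\alpha \le \|\cdot\|_1$ for $\alpha > 1$, `t_alpha(rho0, rho1, 2)` never exceeds the trace distance `t_alpha(rho0, rho1, 1)`, matching the claim in the abstract.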
In this paper, we study a generalization of the House Allocation problem. In our problem, agents are represented by vertices of a graph $\mathcal{G}_{\mathcal{A}} = (\mathcal{A}, E_{\mathcal{A}})$, and each agent $a \in \mathcal{A}$ is associated with a set of preferred houses $\mathcal{P}_a \subseteq \mathcal{H}$, where $\mathcal{A}$ is the set of agents and $\mathcal{H}$ is the set of houses. A house allocation is an injective function $\phi: \mathcal{A} \rightarrow \mathcal{H}$, and an agent $a$ envies a neighbour $a' \in N_{\mathcal{G}_{\mathcal{A}}}(a)$ under $\phi$ if $\phi(a) \notin \mathcal{P}_a$ and $\phi(a') \in \mathcal{P}_a$. We study two natural objectives: the first problem, called \ohaa, aims to compute an allocation that minimizes the number of envious agents; the second problem, called \ohaah, aims to maximize, among all minimum-envy allocations, the number of agents who are assigned a house they prefer. These two objectives capture complementary notions of fairness and individual satisfaction. We design polynomial-time algorithms for both problems for the variant in which each agent prefers exactly one house. On the other hand, when the list of preferred houses for each agent has size at most $2$, we show that both problems are NP-hard even when the agent graph $\mathcal{G}_{\mathcal{A}}$ is a complete bipartite graph. We also show that both problems are NP-hard even when the number $|\mathcal{H}|$ of houses is equal to the number $|\mathcal{A}|$ of agents. This is in contrast to the classical {\sc House Allocation} problem, which is polynomial-time solvable when $|\mathcal{H}| = |\mathcal{A}|$. The two problems are also NP-hard when the agent graph has a small vertex cover. On the positive side, we design exact algorithms that exploit certain structural properties of $\mathcal{G}_{\mathcal{A}}$, such as sparsity, the existence of balanced separators, or the existence of small vertex covers, and that perform better than the naive brute-force algorithm.
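The envy notion and the first objective (minimizing the number of envious agents) can be stated in a few lines of code. A brute-force sketch for tiny instances, illustrating the definitions rather than the paper's algorithms:

```python
from itertools import permutations

def envious_agents(alloc, adj, prefs):
    """Agents a with phi(a) not preferred while some neighbour holds a house a prefers."""
    envious = set()
    for a, h in alloc.items():
        if h in prefs[a]:
            continue  # a got a preferred house, so a envies nobody
        if any(alloc[b] in prefs[a] for b in adj[a]):
            envious.add(a)
    return envious

def min_envy_allocation(agents, houses, adj, prefs):
    """Try every injective assignment of houses to agents; return one with fewest envious agents."""
    best, best_env = None, None
    for assignment in permutations(houses, len(agents)):
        alloc = dict(zip(agents, assignment))
        env = envious_agents(alloc, adj, prefs)
        if best_env is None or len(env) < len(best_env):
            best, best_env = alloc, env
    return best, best_env
```

For two adjacent agents who both prefer only house 1, any injective allocation leaves exactly one agent envious, so the minimum envy is 1; the exhaustive search above finds this directly.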
In the $k$-Orthogonal Vectors ($k$-OV) problem we are given $k$ sets, each containing $n$ binary vectors of dimension $d=n^{o(1)}$, and our goal is to pick one vector from each set so that at each coordinate at least one vector has a zero. It is a central problem in fine-grained complexity, conjectured to require $n^{k-o(1)}$ time in the worst case. We propose a way to \emph{plant} a solution among vectors with i.i.d. $p$-biased entries, for appropriately chosen $p$, so that the planted solution is the unique one. Our conjecture is that the resulting $k$-OV instances still require time $n^{k-o(1)}$ to solve, \emph{on average}. Our planted distribution has the property that any subset of strictly less than $k$ vectors has the \emph{same} marginal distribution as in the model distribution, consisting of i.i.d. $p$-biased random vectors. We use this property to give average-case search-to-decision reductions for $k$-OV.
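The definition of a $k$-OV solution and the conjectured-optimal $n^k$ brute-force search can be sketched directly (a reference implementation for small instances, not an average-case algorithm):

```python
from itertools import product

def is_orthogonal(vectors):
    """k binary vectors form a solution if every coordinate has at least one zero."""
    return all(min(coords) == 0 for coords in zip(*vectors))

def solve_k_ov(sets):
    """Naive n^k search: try one vector from each of the k sets.
    The k-OV conjecture says this exponent is essentially optimal in the worst case."""
    for choice in product(*sets):
        if is_orthogonal(choice):
            return choice
    return None
```

The planting construction in the abstract chooses the remaining entries i.i.d. $p$-biased so that any $k-1$ of the planted vectors look exactly like the null model; the code above only fixes what counts as a valid solution.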
We study a discrete convolution streaming problem. An input arrives as a stream of numbers $z = (z_0,z_1,z_2,\ldots)$, and at time $t$ our goal is to output $(Tz)_t$ where $T$ is a lower-triangular Toeplitz matrix. We focus on space complexity; the algorithm can store a buffer of $\beta(t)$ numbers in order to achieve this goal. We characterize space complexity when algorithms perform algebraic operations. The matrix $T$ corresponds to a generating function $G(x)$. If $G(x)$ is rational of degree $d$, then it is known that the space complexity is at most $O(d)$. We prove a corresponding lower bound; the space complexity is at least $\Omega(d)$. In addition, for irrational $G(x)$, we prove that the space complexity is infinite. We also provide finite-time guarantees. For example, for the generating function $\frac{1}{\sqrt{1-x}}$ that was studied in various previous works in the context of differentially private continual counting, we prove a sharp lower bound on the space complexity; at time $t$, it is at least $\Omega(t)$.
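The $O(d)$ upper bound for rational $G(x)$ of degree $d$ is easiest to see in the simplest case $G(x) = \frac{1}{1-x}$, whose Toeplitz matrix is all-ones lower-triangular, so $(Tz)_t$ is a running sum computable with a single register. A sketch comparing the streaming evaluator against the direct convolution:

```python
def toeplitz_apply(coeffs, z, t):
    """Direct computation: (Tz)_t = sum_{i<=t} c_{t-i} z_i,
    where c_j are the power-series coefficients of G(x)."""
    return sum(coeffs[t - i] * z[i] for i in range(t + 1))

class PrefixSumStream:
    """Streaming evaluator for G(x) = 1/(1-x): all coefficients c_j = 1,
    so (Tz)_t is the prefix sum and one number of state suffices.
    A degree-d rational G generalizes this to a d-register linear recurrence."""
    def __init__(self):
        self.state = 0

    def push(self, z_t):
        self.state += z_t
        return self.state
```

By contrast, the abstract's lower bound shows that for the irrational $\frac{1}{\sqrt{1-x}}$ no such finite-state recurrence exists: the buffer must grow as $\Omega(t)$.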
We study the communication complexity of the Minimum Vertex Cover (MVC) problem on general graphs within the \(k\)-party one-way communication model. Edges of an arbitrary \(n\)-vertex graph are distributed among \(k\) parties. The objective is for the parties to collectively find a small vertex cover of the graph while adhering to a communication protocol where each party sequentially sends a message to the next until the last party outputs a valid vertex cover of the whole graph. We are particularly interested in the trade-off between the size of the messages sent and the approximation ratio of the output solution. It is straightforward to see that any constant approximation protocol for MVC requires communicating \(\Omega(n)\) bits. Additionally, there exists a trivial 2-approximation protocol where the parties collectively find a maximal matching of the graph greedily and return the subset of vertices matched. This raises a natural question: \textit{What is the best approximation ratio achievable using optimal communication of \(O(n)\)?} We design a protocol with an approximation ratio of \((2-2^{-k+1}+\epsilon)\) and \(O(n)\) communication for any arbitrarily small constant \(\epsilon>0\), which is strictly better than 2 for any constant number of parties. Moreover, we show that achieving an approximation ratio smaller than \(3/2\) for the two-party case requires \(n^{1 + \Omega(1/\lg\lg n)}\) communication, thereby establishing the tightness of our protocol for two parties.
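The "trivial 2-approximation" baseline mentioned above is the classical maximal-matching vertex cover; a minimal sketch of the single-machine version (the protocol distributes the greedy pass across the $k$ parties):

```python
def greedy_matching_vertex_cover(edges):
    """2-approximate vertex cover: take both endpoints of a greedily built
    maximal matching. Every edge of the matching forces at least one of its
    endpoints into any cover, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge is unmatched so far
            cover.update((u, v))
    return cover

def is_vertex_cover(cover, edges):
    """Check that every edge has at least one endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)
```

On the path 1-2-3-4 the greedy pass matches (1,2) and (3,4), returning a cover of size 4 while the optimum {2,3} has size 2, exactly attaining the factor-2 bound that the paper's $(2-2^{-k+1}+\epsilon)$ protocol improves on.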
Let $\mathcal{R} = \mathbb{K}[x_1, \dots, x_n]$ be a multivariate polynomial ring over a field $\mathbb{K}$ of characteristic 0. Consider $n$ algebraically independent elements $g_1, \dots, g_n$ in $\mathcal{R}$. Let $\mathcal{S}$ denote the subring of $\mathcal{R}$ generated by $g_1, \dots, g_n$, and let $h$ be an element of $\mathcal{S}$. Then, there exists a unique element ${f} \in \mathbb{K}[u_1, \dots, u_n]$ such that $h = f(g_1, \dots, g_n)$. In this paper, we provide an algorithm for computing ${f}$, given $h$ and $g_1, \dots, g_n$. The complexity of our algorithm is linear in the size of the input, $h$ and $g_1, \dots, g_n$, and polynomial in $n$ when the degree of $f$ is fixed. Previous work mostly concerns the case when $f$ is a symmetric polynomial and $g_1, \dots, g_n$ are the elementary symmetric, complete homogeneous symmetric, or power sum symmetric polynomials.
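A tiny worked instance of the relation $h = f(g_1, \dots, g_n)$, checked numerically at random points (our own example in two variables, not taken from the paper): with $g_1 = x + y$, $g_2 = xy$, and $h = x^2 + y^2$, the unique representing polynomial is $f(u_1, u_2) = u_1^2 - 2u_2$.

```python
import random

# Generators of the subring and the element to represent.
def g1(x, y): return x + y
def g2(x, y): return x * y
def h(x, y):  return x * x + y * y

# The unique f with h = f(g1, g2): x^2 + y^2 = (x + y)^2 - 2xy.
def f(u1, u2): return u1 * u1 - 2 * u2

# Sanity check of the identity at random integer points.
rng = random.Random(1)
for _ in range(100):
    x, y = rng.randint(-50, 50), rng.randint(-50, 50)
    assert h(x, y) == f(g1(x, y), g2(x, y))
```

The paper's contribution is an algorithm that recovers such an $f$ symbolically from $h$ and the $g_i$; the identity above is the special symmetric case ($g_1, g_2$ elementary symmetric) that previous work handled.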
Substructural logics are formal logical systems that omit familiar structural rules of classical and intuitionistic logic such as contraction, weakening, exchange (commutativity), and associativity. This leads to a resource-sensitive logical framework that has proven influential beyond mathematical logic and its algebraic semantics, across theoretical computer science, linguistics, and philosophical logic. The set of theorems of a substructural logic is recursively enumerable and, in many cases, recursive. These logics also possess an intricate mathematical structure that has been the subject of research for over six decades. We undertake a comprehensive study of substructural logics possessing an underlying well-quasi-order (wqo), using established ordinal-indexed fast-growing complexity classes to classify the complexity of their deducibility (quasiequational) and provability (equational) problems. This includes substructural logics with weak variants of contraction and weakening, and logics with weak or even no exchange. We further consider infinitely many axiomatic extensions over the base systems. We establish a host of decidability and complexity bounds, many of them tight, by developing new techniques in proof theory, well-quasi-order theory (contributing new length theorems), the algebraic semantics of substructural logics via residuated lattices, algebraic proof theory, and novel encodings of counter machines. Classifying the computational complexity of substructural logics (and the complexity of the word problem and of the equational theory of their algebraic semantics) reveals how subtle variations in their design influence their algorithmic behavior, with the decision problems often reaching Ackermannian or even hyper-Ackermannian complexity.
Given a graph $G$, a set $T$ of terminal vertices, and a demand graph $H$ on $T$, the \textsc{Multicut} problem asks for a set of edges of minimum weight that separates the pairs of terminals specified by the edges of $H$. The \textsc{Multicut} problem can be solved in polynomial time if the number of terminals and the genus of the graph are bounded (Colin de Verdi\`ere [Algorithmica, 2017]). Focke et al.~[SoCG 2024] characterized which special cases of Multicut are fixed-parameter tractable parameterized by the number of terminals on planar graphs. Moreover, they precisely determined how the parameter genus influences the complexity and presented partial results of this form for graphs that can be made planar by the deletion of $\pi$ edges. We complete the picture on how this parameter $\pi$ influences the complexity of different special cases and precisely determine the influence of the crossing number. Formally, let $\mathcal{H}$ be any class of graphs (satisfying a mild closure property) and let Multicut$(\mathcal{H})$ be the special case when the demand graph $H$ is in $\mathcal{H}$. Our first main result shows that if $\mathcal{H}$ has the combinatorial property of having bounded distance to extended bicliques, then Multicut$(\mathcal{H})$ on unweighted graphs is FPT parameterized by the number $t$ of terminals and $\pi$. For the case when $\mathcal{H}$ does not have this combinatorial property, Focke et al.~[SoCG 2024] showed that $O(\sqrt{t})$ is essentially the best possible exponent of the running time; together with our result, this gives a complete understanding of how the parameter $\pi$ influences complexity on unweighted graphs. Our second main result is an algorithm whose existence shows that the parameter crossing number behaves analogously if we consider weighted graphs.
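The \textsc{Multicut} objective itself is simple to state in code. A brute-force sketch for tiny unweighted instances (exponential in the number of edges; purely illustrative of the problem definition):

```python
from itertools import combinations

def connected(vertices, edges, s, t):
    """Is t reachable from s in the graph (V, E)? Simple DFS."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return False

def min_multicut(vertices, edges, demands):
    """Smallest edge set whose removal separates every demand pair (unit weights)."""
    for k in range(len(edges) + 1):
        for cut in combinations(edges, k):
            rest = [e for e in edges if e not in cut]
            if all(not connected(vertices, rest, s, t) for s, t in demands):
                return set(cut)
```

On the path 1-2-3 with the single demand pair (1, 3), removing either edge suffices, so the minimum multicut has size 1.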
This paper investigates concurrency-constrained scheduling problems, where the objective is to construct a schedule for a set of jobs subject to concurrency restrictions. Formally, we are given a conflict graph $G$ defined over a set of $n$ jobs, where an edge between two jobs in $G$ indicates that these jobs cannot be executed concurrently. Each job may have distinct attributes, such as processing time, due date, weight, and release time. The goal is to determine a schedule that optimizes a specified scheduling criterion while adhering to all concurrency constraints. This framework offers a versatile model for analyzing resource allocation problems where processes compete for shared resources, such as access to shared memory. From a theoretical perspective, it encompasses several classical graph coloring problems, including Chromatic Number, Sum Coloring, and Interval Chromatic Number. Given that even the simplest concurrency-constrained scheduling problems are NP-hard for general conflict graphs, this study focuses on conflict graphs with bounded treewidth. Our results establish a dichotomy: some problems in this setting can be solved in FPT time, while others are shown to be XALP-complete with treewidth as the parameter. Along the way, we generalize several previously known results on coloring problems for bounded-treewidth graphs. Several of the FPT algorithms are based on the insight that completion times are bounded by the Grundy number of the conflict graph; the fact that this number is bounded by the product of treewidth and the logarithm of the number of vertices then leads to the FPT time bound.
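The Grundy number mentioned above is the largest number of colours first-fit greedy colouring can use over all vertex orders; in the unit-time scheduling view, a vertex's colour is its time slot. A minimal sketch of first-fit colouring on a conflict graph:

```python
def first_fit_coloring(adj, order):
    """Greedy first-fit colouring: each vertex takes the smallest colour
    unused by its already-coloured neighbours. The maximum over all orders
    of the number of colours used is the Grundy number, which here bounds
    job completion times in a unit-time conflict schedule."""
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 1
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

On the path 1-2-3, the order [1, 3, 2] colours the endpoints first and forces colour 2 on the middle vertex, which is the Grundy number of this graph.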
This white paper demonstrates that reverse engineering Unidentified Aerial Phenomena (UAP) is NP-complete under classical computational paradigms. By modeling UAP reconstruction as an automaton identification problem with a state characterization matrix M(D, T, E) and examining the inherent challenges in data gathering as well as unknown physics, we show that inferring internal mechanisms (such as Isotopically-Engineered-Materials or unconventional propulsion systems) from finite observational data is computationally intractable. Data D, comprising both operational non-reproducible observations and reproducible analysis data from purported crash retrievals, remains inherently fragmentary. Even if UAP observables were reproducible, the absence of a comprehensive theoretical framework ensures that reverse engineering remains NP-complete, and may escalate to PSPACE-hard or to an Entscheidungsproblem. This intractability challenges current UAP reverse engineering efforts and has profound implications for transparency on UAP technology and related venture investments. Hence, UAP are analogous to modern smartphones in the hands of Neanderthals.
Williams (STOC 2025) recently proved that time-$t$ multitape Turing machines can be simulated in $O(\sqrt{t \log t})$ space via the Cook-Mertz (STOC 2024) tree evaluation procedure. As Williams notes, applying this result to fast algorithms for the circuit value problem implies an $O(\sqrt{s} \cdot \mathrm{polylog}\; s)$ algorithm for evaluating size $s$ circuits. In this work, we provide a direct reduction from circuit value to tree evaluation without passing through Turing machines, simultaneously improving the bound to $O(\sqrt{s \log s})$ space and providing a proof with fewer abstraction layers. This result can be thought of as a "sibling" result to Williams' for circuit complexity instead of time; in particular, using the fact that time-$t$ Turing machines have size $O(t \log t)$ circuits, we can recover a slightly weakened version of Williams' result, simulating time-$t$ machines in space $O(\sqrt{t} \log t)$.
We give a new framework based on graph regularity lemmas, for list decoding and list recovery of codes based on spectral expanders. Using existing algorithms for computing regularity decompositions of sparse graphs in (randomized) near-linear time, and appropriate choices for the constant-sized inner/base codes, we prove the following:

- Expander-based codes constructed using the distance amplification technique of Alon, Edmonds and Luby [FOCS 1995] with rate $\rho$, can be list decoded to a radius $1 - \rho - \epsilon$ in near-linear time. By known results, the output list has size $O(1/\epsilon)$.
- The above codes of Alon, Edmonds and Luby, with rate $\rho$, can also be list recovered to radius $1 - \rho - \epsilon$ in near-linear time, with constant-sized output lists.
- The Tanner code construction of Sipser and Spielman [IEEE Trans. Inf. Theory 1996] with distance $\delta$, can be list decoded to radius $\delta - \epsilon$ in near-linear time, with constant-sized output lists.

Our results imply novel combinatorial as well as algorithmic bounds for each of the above explicit constructions. All of these bounds are obtained via combinatorial rigidity phenomena, proved using (weak) graph regularity. The regularity framework allows us to lift the list decoding and list recovery properties for the local base codes, to the global codes obtained via the above constructions.
The class TotP consists of functions that count the number of all paths of a nondeterministic polynomial-time Turing machine. In this paper, we give a second definition of the class TotP in terms of certificates and verifiers, and present a few natural closure properties of this class. We also prove that the closure of TotP under left composition with the class FP+ is equivalent to TotP = FP+ and P = PP, and give examples of FP+-functions such that if TotP is closed under composition with them, then it is closed under composition with FP+.
We introduce the magic hierarchy, a quantum circuit model that alternates between arbitrary-sized Clifford circuits and constant-depth circuits with two-qubit gates ($\textsf{QNC}^0$). This model unifies existing circuit models, such as $\textsf{QAC}^0_f$ and models with adaptive intermediate measurements. Despite its generality, we are able to prove nontrivial lower bounds. We prove new lower bounds in the first level of the hierarchy, showing that certain explicit quantum states cannot be approximately prepared by circuits consisting of a Clifford circuit followed by $\textsf{QNC}^0$. These states include ground states of some topologically ordered Hamiltonians and nonstabilizer quantum codes. Our techniques exploit the rigid structure of stabilizer codes and introduce an infectiousness property: if even a single state in a high distance code can be approximately prepared by one of these circuits, then the entire subspace must lie close to a perturbed stabilizer code. We also show that proving state preparation lower bounds beyond a certain level of the hierarchy would imply classical circuit lower bounds beyond the reach of current techniques in complexity theory. More broadly, our techniques go beyond lightcone-based methods and highlight how the magic hierarchy provides a natural framework for connecting circuit complexity, condensed matter, and Hamiltonian complexity.
We study the complexity of satisfiability problems in probabilistic and causal reasoning. Given random variables $X_1, X_2,\ldots$ over finite domains, the basic terms are probabilities of propositional formulas over atomic events $X_i = x_i$, such as $P(X_1 = x_1)$ or $P(X_1 = x_1 \vee X_2 = x_2)$. The basic terms can be combined using addition (yielding linear terms) or multiplication (polynomial terms). The probabilistic satisfiability problem asks whether a joint probability distribution satisfies a Boolean combination of (in)equalities over such terms. Fagin et al. (1990) showed that for basic and linear terms, this problem is NP-complete, making it no harder than Boolean satisfiability, while Moss\'e et al. (2022) proved that for polynomial terms, it is complete for the existential theory of the reals. Pearl's Causal Hierarchy (PCH) extends the probabilistic setting with interventional and counterfactual reasoning, enriching the expressiveness of languages. However, Moss\'e et al. (2022) found that satisfiability complexity remains unchanged. Van der Zander et al. (2023) showed that introducing a marginalization operator to languages induces a significant increase in complexity. We extend this line of work by adding two new dimensions to the problem by constraining the models. First, we fix the graph structure of the underlying structural causal model, motivated by settings like Pearl's do-calculus, and give a nearly complete landscape across different arithmetics and PCH levels. Second, we study small models. While earlier work showed that satisfiable instances admit polynomial-size models, this is no longer guaranteed with compact marginalization. We characterize the complexities of satisfiability under small-model constraints across different settings.
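The "basic terms" of the abstract (probabilities of propositional formulas over atomic events) are straightforward to evaluate against an explicitly given joint distribution. A minimal sketch, with a uniform distribution over two binary variables as a hypothetical example:

```python
from itertools import product

def prob(dist, formula):
    """P(formula) for a joint distribution given as {assignment_tuple: probability};
    formula is a predicate over the assignment tuple (a[0] = X1, a[1] = X2, ...)."""
    return sum(p for a, p in dist.items() if formula(a))

# Uniform joint distribution over X1, X2 in {0, 1}.
dist = {a: 0.25 for a in product((0, 1), repeat=2)}

p_or = prob(dist, lambda a: a[0] == 1 or a[1] == 1)    # basic term P(X1=1 or X2=1)
p1 = prob(dist, lambda a: a[0] == 1)                   # P(X1=1)
p2 = prob(dist, lambda a: a[1] == 1)                   # P(X2=1)
p_and = prob(dist, lambda a: a[0] == 1 and a[1] == 1)  # P(X1=1 and X2=1)

# A linear (in)equality over basic terms; inclusion-exclusion holds as an equality.
assert abs(p_or - (p1 + p2 - p_and)) < 1e-12
```

The satisfiability problems in the abstract ask the converse question: given a Boolean combination of such (in)equalities, does *some* joint distribution (or structural causal model, at the higher PCH levels) satisfy it.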
In this paper, we exhibit an $\textsf{AC}^{3}$ isomorphism test for groups without Abelian normal subgroups (a.k.a. Fitting-free groups), a class for which isomorphism testing was previously known to be in $\mathsf{P}$ (Babai, Codenotti, and Qiao; ICALP '12). Here, we leverage the fact that $G/\text{PKer}(G)$ can be viewed as a permutation group of degree $O(\log |G|)$. As $G$ is given by its multiplication table, we are able to implement the solution for the corresponding instance of Twisted Code Equivalence in $\textsf{AC}^{3}$. In sharp contrast, we show that when our groups are specified by a generating set of permutations, isomorphism testing of Fitting-free groups is at least as hard as Graph Isomorphism and Linear Code Equivalence (the latter being $\textsf{GI}$-hard and having no known subexponential-time algorithm). Lastly, we show that any Fitting-free group of order $n$ is identified by $\textsf{FO}$ formulas (without counting) using only $O(\log \log n)$ variables. This is in contrast to the fact that there are infinite families of Abelian groups that are not identified by $\textsf{FO}$ formulas with $o(\log n)$ variables (Grochow & Levet, FCT '23).
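The multiplication-table (Cayley table) input model referred to above can be illustrated with a brute-force isomorphism test on tiny groups; the paper's point is that for Fitting-free groups this can be done far more efficiently, in $\textsf{AC}^{3}$.

```python
from itertools import permutations

def is_isomorphic(table1, table2):
    """Brute-force isomorphism test on Cayley tables (n x n lists over 0..n-1):
    search for a bijection pi with pi(a * b) = pi(a) * pi(b). Exponential in n;
    purely illustrative of the multiplication-table input model."""
    n = len(table1)
    if len(table2) != n:
        return False
    for pi in permutations(range(n)):
        if all(pi[table1[a][b]] == table2[pi[a]][pi[b]]
               for a in range(n) for b in range(n)):
            return True
    return False
```

For example, the cyclic group $\mathbb{Z}_4$ (table $(i+j) \bmod 4$) is not isomorphic to the Klein four-group (table $i \oplus j$), since the latter has no element of order 4; the exhaustive search confirms this.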
In recent years, the quantum oracle model introduced by Aaronson and Kuperberg (2007) has found a lot of use in showing oracle separations between complexity classes and cryptographic primitives. It is generally assumed that proof techniques that do not relativize with respect to quantum oracles will also not relativize with respect to classical oracles. In this note, we show that this is not the case: specifically, we show that there is a quantum oracle problem that is contained in the class QMA, but not in a class we call polyQCPH. The class polyQCPH is equal to PSPACE with respect to classical oracles, and it is a well-known result that QMA is contained in PSPACE (also with respect to classical oracles). We also show that the same separation holds relative to a distributional oracle, which is a model introduced by Natarajan and Nirkhe (2024). We believe our findings show the need for some caution when using these non-standard oracle models, particularly when showing separations between quantum and classical resources.