We show that linearly constrained linear optimization over a Stiefel or Grassmann manifold is NP-hard in general. We show that the same is true for unconstrained quadratic optimization over a Stiefel manifold. We also establish the nonexistence of an FPTAS for these optimization problems over a Stiefel manifold. As an aside, we extend our results to flag manifolds. Combined with earlier findings, this shows that manifold optimization is a difficult endeavor -- even the simplest problems like LP and unconstrained QP are already NP-hard on the most common manifolds.
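For concreteness, the objects involved can be written as follows (a sketch in our own notation; the constraint data $A_i, b_i$ and the quadratic form $A, B, C$ are illustrative assumptions, and the paper's exact formulations may differ):

```latex
% Stiefel manifold: matrices with orthonormal columns.
\[
V_k(\mathbb{R}^n) := \{X \in \mathbb{R}^{n\times k} : X^{\mathsf T}X = I_k\}.
\]
% Linearly constrained LP and unconstrained QP over the manifold:
\[
\min_{X \in V_k(\mathbb{R}^n)} \operatorname{tr}(C^{\mathsf T}X)
\quad\text{s.t.}\quad \operatorname{tr}(A_i^{\mathsf T}X) \le b_i,\ i = 1,\dots,m;
\qquad
\min_{X \in V_k(\mathbb{R}^n)} \operatorname{tr}(X^{\mathsf T}AXB) + \operatorname{tr}(C^{\mathsf T}X).
\]
```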
Membrane systems represent a computational model that operates in a distributed and parallel manner, inspired by the behavior of biological cells. These systems feature objects that transform within a nested membrane structure. This research concentrates on a specific type of these systems, based on cellular symport/antiport communication of chemicals. Results in the literature show that systems of this type that also allow cell division can solve PSPACE-complete problems. In our study, we investigate systems that use membrane separation instead of cell division, for which only limited results are available. Notably, it has been shown that any problem solvable by such systems in polynomial time falls within the complexity class P^(#P). By implementing a system solving MIDSAT, a P^(#P)-complete problem, we demonstrate that the reverse inclusion holds as well, thus providing an exact characterization of the class of problems solvable by P systems with symport/antiport and membrane separation. Moreover, our implementation uses rules of length at most three. With this limit, such systems were already known to solve NP-complete problems, whereas with rules of length at most two they characterize exactly P.
The existence of one-way functions (OWFs) forms the minimal assumption in classical cryptography. However, this is not necessarily the case in quantum cryptography. One-way puzzles (OWPuzzs), introduced by Khurana and Tomer, provide a natural quantum analogue of OWFs. The existence of OWPuzzs implies $PP\neq BQP$, while the converse remains open. In classical cryptography, the analogous question, whether OWFs can be constructed from $P \neq NP$, has long been studied from the viewpoint of hardness of learning. Hardness of learning in various frameworks (including PAC learning) has been connected to OWFs or to $P \neq NP$. In contrast, no such characterization previously existed for OWPuzzs. In this paper, we establish the first complete characterization of OWPuzzs based on the hardness of a well-studied learning model: distribution learning. Specifically, we prove that OWPuzzs exist if and only if proper quantum distribution learning is hard on average. A natural question that follows is whether the worst-case hardness of proper quantum distribution learning can be derived from $PP \neq BQP$. If so, and if a worst-case to average-case hardness reduction is achieved, it would imply OWPuzzs solely from $PP \neq BQP$. However, we show that this would be extremely difficult: if worst-case hardness is PP-hard (under a black-box reduction), then $SampBQP \neq SampBPP$ follows from the infiniteness of the polynomial hierarchy. Despite that, we show that $PP \neq BQP$ is equivalent to another standard notion of hardness of learning: agnostic learning. We prove that $PP \neq BQP$ if and only if agnostic quantum distribution learning with respect to KL divergence is hard. As a byproduct, we show that hardness of agnostic quantum distribution learning with respect to statistical distance against $PPT^{\Sigma_3^P}$ learners implies $SampBQP \neq SampBPP$.
The Reconfiguration Inapproximability Hypothesis (RIH), recently established by Hirahara-Ohsaka (STOC'24) and Karthik-Manurangsi (ECCC'24), studies the hardness of reconfiguring one solution into another in constraint satisfaction problems (CSP) when restricted to approximate intermediate solutions. In this work, we make a tighter connection between RIH's soundness gap and that of probabilistically checkable proofs of proximity (PCPP). Consequently, we achieve an improved trade-off between soundness and query complexity in Gap CSP Reconfiguration. Our approach leverages a parallelization framework, which also appears in some recent parameterized inapproximability results.
We study the minimum \emph{Monitoring Edge Geodetic Set} (\megset) problem introduced in [Foucaud et al., CALDAM'23]: given a graph $G$, we say that an edge $e$ is monitored by a pair $u,v$ of vertices if \emph{all} shortest paths between $u$ and $v$ traverse $e$; the goal is to find a subset $M$ of vertices of $G$ such that each edge of $G$ is monitored by at least one pair of vertices in $M$, and $|M|$ is minimized. In this paper, we prove that all polynomial-time approximation algorithms for the minimum \megset problem must have an approximation ratio of $\Omega(\log n)$, unless \p = \np. To the best of our knowledge, this is the first non-constant inapproximability result known for this problem. We also strengthen the known \np-hardness of the problem on $2$-apex graphs by showing that the same result holds for $1$-apex graphs. This leaves open the problem of determining whether the problem remains \np-hard on planar (i.e., $0$-apex) graphs. On the positive side, we design an algorithm that computes good approximate solutions for hereditary graph classes that admit efficiently computable balanced separators of truly sublinear size. This immediately yields polynomial-time approximation algorithms achieving an approximation ratio of $O(n^{\frac{1}{4}} \sqrt{\log n})$ on planar graphs, graphs with bounded genus, and $k$-apex graphs with $k=O(n^{\frac{1}{4}})$. On graphs with bounded treewidth, we obtain an approximation ratio of $O(\log^{3/2+\varepsilon} n)$ for any constant $\varepsilon > 0$. This compares favorably with the best-known approximation algorithm for general graphs, which achieves an approximation ratio of $O(\sqrt{n \log n})$ via a simple reduction to the \textsc{Set Cover} problem.
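To make the definition concrete, here is a brute-force sketch of the problem (our own illustrative code built on networkx; exponential time, so only sensible for tiny graphs):

```python
# Brute-force Monitoring Edge Geodetic Set: an edge e is monitored by a
# pair (u, v) iff *every* shortest u-v path traverses e. Illustrative only.
from itertools import combinations
import networkx as nx

def monitored_edges(G, u, v):
    """Edges lying on all shortest u-v paths."""
    paths = list(nx.all_shortest_paths(G, u, v))
    edge_sets = [{frozenset(e) for e in zip(p, p[1:])} for p in paths]
    return set.intersection(*edge_sets)

def is_meg_set(G, M):
    covered = set()
    for u, v in combinations(M, 2):
        covered |= monitored_edges(G, u, v)
    return covered == {frozenset(e) for e in G.edges()}

def minimum_meg_set(G):  # exponential-time baseline
    nodes = list(G.nodes())
    for k in range(1, len(nodes) + 1):
        for M in combinations(nodes, k):
            if is_meg_set(G, M):
                return set(M)

print(minimum_meg_set(nx.path_graph(4)))  # {0, 3}: endpoints suffice on a path
```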
We prove that Hamiltonian Path and Hamiltonian Cycle are NP-hard on graphs of linear mim-width 26, even when a linear order of the input graph with mim-width 26 is provided together with the input. This fills a gap left by a broken proof of the para-NP-hardness of Hamiltonicity problems parameterized by mim-width.
The synthesis of quantum operators involves decomposing general quantum gates into the gate set supported by a given quantum device. Multi-controlled gates are essential components in this process. In this work, we present improved decompositions of multi-controlled NOT gates with logarithmic depth using a single ancilla qubit, while also reducing the constant factors in the circuit depth compared to previous work. We optimize a previously proposed decomposition of multi-target, multi-controlled special unitary SU(2) gates by identifying the presence of a conditionally clean qubit. Additionally, we introduce the best-known approximate decomposition of multi-controlled unitary U(2) gates without using ancilla qubits. This approach significantly reduces the overall circuit depth and CNOT count while preserving an adjustable error parameter, yielding a more efficient and scalable solution for synthesizing large controlled-unitary gates. Our method is suitable for both NISQ and fault-tolerant quantum architectures. All software developed in this project is freely available.
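For a sense of the metrics such decompositions target, the following snippet (our own, assuming a recent Qiskit; it uses Qiskit's built-in multi-controlled-X synthesis, not the decomposition proposed in this work) measures depth and CNOT count after compiling to a {CX, U} basis:

```python
# Count depth and CNOTs of Qiskit's default multi-controlled-X synthesis.
# This is NOT the paper's construction; it only illustrates the quantities
# (circuit depth, CNOT count) that improved decompositions aim to reduce.
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import MCXGate

n_controls = 6
qc = QuantumCircuit(n_controls + 1)
qc.append(MCXGate(n_controls), range(n_controls + 1))

compiled = transpile(qc, basis_gates=["cx", "u"], optimization_level=1)
print("depth:", compiled.depth(), "cx:", compiled.count_ops().get("cx", 0))
```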
In a connected simple graph G = (V(G), E(G)), each vertex is assigned a color from the set C = {1, 2, ..., c}. The vertex set V(G) is partitioned into V_1, V_2, ..., V_c, where all vertices in V_j share the same color j. A subset S of V(G) is called a selective subset if, for every vertex v in V(G) with v in V_j, at least one of the nearest neighbors of v in (S union (V(G) \ V_j)) has the same color as v. The Minimum Selective Subset (MSS) problem seeks a selective subset of minimum size. The problem was first introduced by Wilfong in 1991 for a set of points in the Euclidean plane, where two major problems, MCS (Minimum Consistent Subset) and MSS, were proposed. In graph algorithms, the only known result is that the MSS problem is NP-complete, as shown in 2018. Beyond this, no further progress has been made to date. In contrast, the MCS problem has been widely studied in various graph classes over the years. In this work, we therefore extend the algorithmic study of MSS to various graph classes. We first present a log(n)-approximation algorithm for general graphs with n vertices, regardless of the number of colors. We also show that the problem remains NP-complete on planar graphs even when restricted to just two colors. Finally, we provide polynomial-time algorithms for computing optimal solutions in trees and unit interval graphs for any number of colors.
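A brute-force feasibility check makes the definition concrete (our own sketch; we read the definition as letting a selected vertex be its own nearest candidate at distance 0):

```python
# Selective subset check: every vertex v of color c must have, among its
# nearest vertices in S ∪ (V \ V_c), one sharing its color (necessarily in S).
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def is_selective(adj, color, S):
    for v in adj:
        cand = [u for u in adj if u in S or color[u] != color[v]]
        d = bfs_dist(adj, v)  # graph is assumed connected
        dmin = min(d[u] for u in cand)
        if not any(color[u] == color[v] for u in cand if d[u] == dmin):
            return False
    return True

adj = {1: [2], 2: [1, 3], 3: [2]}
color = {1: "red", 2: "blue", 3: "red"}
print(is_selective(adj, color, {2}))        # False: 1's nearest candidate is blue
print(is_selective(adj, color, {1, 2, 3}))  # True
```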
In this paper, we study the query complexity of Boolean functions in the presence of uncertainty, motivated by parallel computation with an unlimited number of processors where inputs are allowed to be unknown. We allow each query to produce three results: zero, one, or unknown. The output can also be zero, one, or unknown, with the constraint that we output "unknown" only when the answer cannot be determined from the revealed input bits. Such an extension of a Boolean function is called its hazard-free extension.
- We prove an analogue of Huang's celebrated sensitivity theorem [Annals of Mathematics, 2019] in our model of query complexity with uncertainty.
- We show that the deterministic query complexity of the hazard-free extension of a Boolean function is at most quadratic in its randomized query complexity and quartic in its quantum query complexity, improving upon the best-known bounds in the Boolean world.
- We exhibit an exponential gap between the smallest depth (size) of decision trees computing a Boolean function and those computing its hazard-free extension.
- We present general methods to convert decision trees for Boolean functions to those for their hazard-free counterparts, and show optimality of this construction. We also parameterize this result by the maximum number of unknown values in the input.
- We show lower bounds on the size complexity of decision trees for hazard-free extensions of Boolean functions in terms of the number of prime implicants and prime implicates of the underlying Boolean function.
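The model is easy to state in code (our own sketch): the hazard-free extension outputs a Boolean value exactly when all completions of the unknown positions agree.

```python
# Hazard-free extension: output 0/1 only if all 0/1-completions of the
# unknown ('u') positions agree; otherwise output 'u'.
from itertools import product

def hazard_free(f, x):
    unknown = [i for i, b in enumerate(x) if b == "u"]
    vals = set()
    for bits in product([0, 1], repeat=len(unknown)):
        y = list(x)
        for i, b in zip(unknown, bits):
            y[i] = b
        vals.add(f([int(v) for v in y]))
    return vals.pop() if len(vals) == 1 else "u"

mux = lambda z: z[z[0] + 1]           # MUX(s, a, b) = a if s == 0 else b
print(hazard_free(mux, ["u", 1, 1]))  # -> 1: selector unknown, branches agree
print(hazard_free(mux, ["u", 0, 1]))  # -> 'u': branches disagree
```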
In a large-scale network, we want to choose some influential nodes within a limited budget so that every node is either chosen or adjacent to a chosen node, and thus no further budget needs to be spent on the neighbors of chosen nodes; our problem is a graph-theoretic formulation of this task. We define the Dominating Set Knapsack problem by combining the Knapsack problem with Dominating Set on graphs. Each vertex is associated with a cost and a profit. We aim to choose, within a fixed budget, a set of vertices of maximum profit such that every vertex not chosen has a chosen neighbor. We show that the Dominating Set Knapsack problem is strongly NP-complete even when restricted to bipartite graphs, but weakly NP-complete for star graphs. We present a pseudo-polynomial time algorithm for trees running in time $O(n\cdot \min\{s^2, (\alpha(V))^2\})$. We show that Dominating Set Knapsack is very unlikely to be fixed-parameter tractable (FPT) by proving that it is W[2]-hard parameterized by the solution size. We develop FPT algorithms with running times $O(4^{tw}\cdot n^{O(1)} \cdot \min\{s^2,{\alpha(V)}^2\})$ and $O(2^{vck-1}\cdot n^{O(1)} \cdot \min\{s^2,{\alpha(V)}^2\})$, where $tw$ is the treewidth of the given graph, $vck$ is the solution size of Vertex Cover Knapsack, $s$ is the size of the knapsack, and $\alpha(V)=\sum_{v\in V}\alpha(v)$.
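Under this reading of the problem, a brute-force baseline looks as follows (our own illustrative sketch, exponential in the number of vertices):

```python
# Brute-force Dominating Set Knapsack: maximize total profit over vertex
# sets that are dominating and have total cost at most the budget s.
from itertools import combinations

def dsk_brute_force(adj, cost, profit, s):
    best, best_set = None, None
    V = list(adj)
    for k in range(len(V) + 1):
        for D in map(set, combinations(V, k)):
            dominating = all(v in D or D & set(adj[v]) for v in V)
            if dominating and sum(cost[v] for v in D) <= s:
                p = sum(profit[v] for v in D)
                if best is None or p > best:
                    best, best_set = p, D
    return best, best_set

adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # a star
cost = {0: 2, 1: 1, 2: 1, 3: 1}
profit = {0: 5, 1: 4, 2: 4, 3: 4}
print(dsk_brute_force(adj, cost, profit, s=4))  # -> (13, {0, 1, 2})
```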
Parameterized local search combines classic local search heuristics with the paradigm of parameterized algorithmics. While most local search algorithms aim to improve a given solution by performing a single operation on it, the parameterized approach aims to improve a solution by performing $k$ simultaneous operations. Herein, $k$ is a parameter, called the search radius, whose value can be chosen by the user. One major goal in the field of parameterized local search is to outline the trade-off between the size of $k$ and the running time of the local search step. In this work, we introduce an abstract framework that generalizes natural parameterized local search approaches for a large class of partitioning problems: given $n$ items that are partitioned into $b$ bins and a target function that evaluates the quality of the current partition, one asks whether it is possible to improve the solution by removing up to $k$ items from their current bins and reassigning them to other bins (a naive version of this step is sketched below). Among others, our framework applies to the local search versions of problems like Cluster Editing, Vector Bin Packing, and Nash Social Welfare. Motivated by a real-world application of the problem Vector Bin Packing, we introduce a parameter called the number of types $\tau \le n$ and show that all problems fitting in our framework can be solved in $\tau^k 2^{O(k)} |I|^{O(1)}$ time, where $|I|$ denotes the total input size. In the case of Cluster Editing, the parameter $\tau$ generalizes the well-known parameter neighborhood diversity of the input graph. We complement this by showing that for all considered problems, an algorithm significantly improving over our algorithm with running time $\tau^k 2^{O(k)} |I|^{O(1)}$ would contradict the ETH. Additionally, we show that even on very restricted instances, all considered problems are W[1]-hard when parameterized by the search radius $k$ alone.
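The naive local-search step referred to above can be sketched as follows (our own code; it inspects all $O((nb)^k)$ candidate moves, which is exactly the brute force that the $\tau^k 2^{O(k)} |I|^{O(1)}$ algorithm improves upon by treating items of the same type interchangeably):

```python
# Radius-k improvement step for a generic partitioning problem: move up to
# k items to other bins and accept the first strictly better partition.
from itertools import combinations, product

def improve_step(items, bins, assign, score, k):
    """assign: dict item -> bin; score: callable(assign) -> value (bigger is better)."""
    base = score(assign)
    for r in range(1, k + 1):
        for moved in combinations(items, r):
            others = [[b for b in bins if b != assign[i]] for i in moved]
            for target in product(*others):
                new = dict(assign)
                new.update(zip(moved, target))
                if score(new) > base:
                    return new            # improved solution found
    return None                           # assign is k-optimal
```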
We prove that Boolean matrices with bounded $\gamma_2$-norm or bounded normalized trace norm must contain a linear-sized all-ones or all-zeros submatrix, verifying a conjecture of Hambardzumyan, Hatami, and Hatami. We also present further structural results about Boolean matrices of bounded $\gamma_2$-norm and discuss applications in communication complexity, operator theory, spectral graph theory, and extremal combinatorics. As a key application, we establish an inverse theorem for MaxCut. A celebrated result of Edwards states that every graph $G$ with $m$ edges has a cut of size at least $\frac{m}{2}+\frac{\sqrt{8m+1}-1}{8}$, with equality achieved by complete graphs with an odd number of vertices. In contrast, we prove that if the MaxCut of $G$ is at most $\frac{m}{2}+O(\sqrt{m})$, then $G$ must contain a clique of size $\Omega(\sqrt{m})$.
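A quick calculation confirms the equality case for $K_n$ with $n$ odd:

```latex
% For K_n with n odd: m = n(n-1)/2, so 8m + 1 = (2n-1)^2 and
\[
\frac{m}{2} + \frac{\sqrt{8m+1}-1}{8}
  = \frac{n(n-1)}{4} + \frac{(2n-1)-1}{8}
  = \frac{n(n-1)}{4} + \frac{n-1}{4}
  = \frac{n^2-1}{4},
\]
% which equals MaxCut(K_n) = floor(n/2) * ceil(n/2) = (n-1)(n+1)/4 for odd n.
```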
Specialized computational units that perform small matrix multiplications as primitive operations are typically present in modern accelerators. However, these units are often underutilized for many fundamental operations besides dense matrix multiplications. The analysis of algorithms for such architectures has stagnated due to the lack of a rigorous theoretical model of computation that captures their characteristics. In this work, we propose MMV-RAM, a computational model tailored to matrix multiplication accelerators. MMV-RAM judiciously extends the Vector-RAM model with an additional processing unit that multiplies two matrices of sizes $n\times s$ and $s\times s$ in a single parallel step, where $s$ is a model parameter. We provide a detailed theoretical analysis of the model, and carefully balance the computational power between the matrix and vector units, guided by the circuit complexity lower bound that parity is not in AC^0. In MMV-RAM, we study algorithms for segmented scan and sum, two fundamental parallel primitives. We propose a segmented scan algorithm that uses matrix multiplications to perform speculative block-scan computations, which runs in $O(\log_s(n))$ steps. In contrast, we show that any algorithm that uses only the vector unit of MMV-RAM requires $\Omega\left(\frac{\log_2(n)}{\log_2\log_2(n)}\right)$ steps. We further apply these techniques to obtain similar theoretical speedups for element-wise vector multiplication and matrix multiplication. Beyond the worst-case complexity analysis, we propose algorithms for segmented operations that could lead to highly efficient and pragmatic implementations. For example, we observe that segmented sum is a combination of three elementary parallel primitives: scan, compress, and vector differentiation (see the sketch below). As a case study, we implement...
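That closing observation is easy to demonstrate (our own NumPy sketch of segmented sum as scan, compress, and differentiate; not the MMV-RAM implementation):

```python
# Segmented sum via three primitives: scan (cumsum), compress (take the
# running total at each segment's last position), differentiate (np.diff).
import numpy as np

values = np.array([3, 1, 4, 1, 5, 9, 2, 6])
heads  = np.array([1, 0, 0, 1, 0, 1, 0, 0])    # 1 marks a segment start

scan = np.cumsum(values)                        # 1) scan
ends = np.flatnonzero(np.r_[heads[1:], 1])      # positions of segment ends
totals = scan[ends]                             # 2) compress
print(np.diff(totals, prepend=0))               # 3) differentiate -> [ 8  6 17]
```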
We consider the problem of finding a Hamiltonian path or a Hamiltonian cycle with precedence constraints in the form of a partial order on the vertex set. We show that the path problem is $\mathsf{NP}$-complete for graphs of pathwidth 4, while the cycle problem is $\mathsf{NP}$-complete on graphs of pathwidth 5. We complement these results by giving polynomial-time algorithms for graphs of pathwidth 3 and treewidth 2 for Hamiltonian paths, as well as pathwidth 4 and treewidth 3 for Hamiltonian cycles. Furthermore, we study the complexity of the path and cycle problems on rectangular grid graphs of bounded height. For these, we show that the path and cycle problems are $\mathsf{NP}$-complete when the height of the grid is at least 7 and 9, respectively. In the variant where we look for minimum edge-weighted Hamiltonian paths and cycles, the problems are $\mathsf{NP}$-hard for heights 5 and 6, respectively.
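For reference, the problem being classified here admits a simple brute-force baseline (our own sketch, checking both the Hamiltonian-path property and the precedence pairs):

```python
# Brute force: find a Hamiltonian path visiting vertices consistently with
# a partial order given as precedence pairs (a must appear before b).
from itertools import permutations

def po_hamiltonian_path(adj, precedences):
    V = list(adj)
    for order in permutations(V):
        pos = {v: i for i, v in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in precedences) and \
           all(w in adj[v] for v, w in zip(order, order[1:])):
            return order
    return None

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(po_hamiltonian_path(adj, [(2, 1)]))   # -> (2, 1, 3, 4)
```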
In this paper, we provide a uniform framework for investigating small circuit classes and bounds through the lens of ordinary differential equations (ODEs). Following an approach recently introduced to capture the class of polynomial-time computable functions via ODE-based recursion schemas, and later applied to the context of functions computed by unbounded fan-in circuits of constant depth (FAC^0), we study multiple relevant small circuit classes. In particular, we show that natural restrictions on linearity and on derivation along functions with a specific growth rate correspond to classes of functions that can be proved to lie in various classes, ranging from FAC^0 to FAC^1. This reveals an intriguing link between constraints over linear-length ODEs and circuit computation, providing new tools to tackle the complex challenge of establishing bounds for classes in the circuit hierarchies and possibly enhancing our understanding of the role of counters in this setting. Additionally, we establish several completeness results, in particular obtaining the first ODE-based characterizations for the classes of functions computable in constant depth with unbounded fan-in and Mod 2 gates (FAC^0[2]) and in logarithmic depth with bounded fan-in Boolean gates (FNC^1).
Given a graph $G$ and integers $k, x \geq 0$, the Critical Node Cut problem asks whether it is possible to delete at most $k$ vertices from $G$ such that the number of remaining pairs of connected vertices is at most $x$. This problem generalizes Vertex Cover (when $x = 0$), and has applications in network design, epidemiology, and social network analysis. We investigate the parameterized complexity of Critical Node Cut under various structural parameters. We first significantly strengthen existing hardness results by proving W[1]-hardness even when parameterized by the combined parameter $k + \mathrm{fes} + \Delta + \mathrm{pw}$, where $\mathrm{fes}$ is the feedback edge set number, $\Delta$ the maximum degree, and $\mathrm{pw}$ the pathwidth of the input graph. We then identify three structural parameters--max-leaf number, vertex integrity, and modular-width--that render the problem fixed-parameter tractable. Furthermore, leveraging a technique introduced by Lampis [ICALP '14], we develop an FPT approximation scheme that, for any $\varepsilon > 0$, computes a $(1+\varepsilon)$-approximate solution in time $(\mathrm{tw} / \varepsilon)^{\mathcal{O}(\mathrm{tw})} n^{\mathcal{O}(1)}$, where $\mathrm{tw}$ denotes the treewidth of the input graph. Finally, we show that Critical Node Cut does not admit a polynomial kernel when parameterized by vertex cover number, unless standard complexity assumptions fail. Overall, our results significantly sharpen the known complexity landscape of Critical Node Cut.
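The problem statement translates directly into a brute-force baseline (our own sketch using networkx; exponential time, for illustration only):

```python
# Brute-force Critical Node Cut: is there a set of <= k vertices whose
# removal leaves at most x connected pairs? Pairs are counted per component.
from itertools import combinations
import networkx as nx

def critical_node_cut(G, k, x):
    for r in range(k + 1):
        for S in combinations(G.nodes(), r):
            H = G.copy()
            H.remove_nodes_from(S)
            pairs = sum(len(c) * (len(c) - 1) // 2
                        for c in nx.connected_components(H))
            if pairs <= x:
                return set(S)
    return None

G = nx.path_graph(5)                    # 0-1-2-3-4
print(critical_node_cut(G, k=1, x=2))   # -> {2}: two P2's leave 2 pairs
```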
We show that the GCD of two univariate polynomials can be computed by (piece-wise) algebraic circuits of constant depth and polynomial size over any sufficiently large field, regardless of the characteristic. This extends a recent result of Andrews & Wigderson who showed such an upper bound over fields of zero or large characteristic. Our proofs are based on a recent work of Bhattacharjee, Kumar, Rai, Ramanathan, Saptharishi \& Saraf that shows closure of constant depth algebraic circuits under factorization. On our way to the proof, we show that any $n$-variate symmetric polynomial $P$ that has a small constant depth algebraic circuit can be written as the composition of a small constant depth algebraic circuit with elementary symmetric polynomials. This statement is a constant depth version of a result of Bl\"{a}ser & Jindal, who showed this for algebraic circuits of unbounded depth. As an application of our techniques, we also strengthen the closure results for factors of constant-depth circuits in the work of Bhattacharjee et al. over fields of small characteristic.
We show that algebraic formulas and constant-depth circuits are closed under taking factors. In other words, we show that if a multivariate polynomial over a field of characteristic zero has a small constant-depth circuit or formula, then all its factors can be computed by small constant-depth circuits or formulas respectively. Our result turns out to be an elementary consequence of a fundamental and surprising result of Furstenberg from the 1960s, which gives a non-iterative description of the power series roots of a bivariate polynomial. Combined with standard structural ideas in algebraic complexity, we observe that this theorem yields the desired closure results. As applications, we get alternative (and perhaps simpler) proofs of various known results and strengthen the quantitative bounds in some of them. This includes a unified proof of known closure results for algebraic models (circuits, branching programs and VNP), an extension of the analysis of the Kabanets-Impagliazzo hitting set generator to formulas and constant-depth circuits, and a (significantly) simpler proof of correctness as well as stronger guarantees on the output in the subexponential time deterministic algorithm for factorization of constant-depth circuits from a recent work of Bhattacharjee, Kumar, Ramanathan, Saptharishi & Saraf.