We establish a bijective correspondence between Smirnov words with balanced letter multiplicities and Hamiltonian paths in complete $m$-partite graphs $K_{n,n,\ldots,n}$. This bijection allows us to derive closed inclusion-exclusion formulas for the number of Hamiltonian cycles in such graphs. We further extend the enumeration to the generalized nonuniform case $K_{n_1,n_2,\ldots,n_m}$. We also provide an asymptotic analysis based on Stirling's approximation, which yields compact factorial expressions and logarithmic expansions describing the growth of the number of Hamiltonian cycles in the considered graphs. Our approach unifies the combinatorial study of adjacency-constrained words and the enumeration of Hamiltonian cycles within a single analytical framework.
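To make the counted objects concrete, here is a small brute-force sketch (ours, for illustration only; it does not implement the paper's bijection or its inclusion-exclusion formulas) that counts Hamiltonian cycles in complete multipartite graphs with tiny part sizes:

```python
from itertools import permutations

def hamiltonian_cycles(part_sizes):
    """Brute-force count of undirected Hamiltonian cycles in the complete
    multipartite graph K_{n1,...,nm}; feasible only for very small graphs."""
    parts = [i for i, size in enumerate(part_sizes) for _ in range(size)]
    vertices = list(range(len(parts)))
    first = vertices[0]            # fix a start vertex to factor out rotations
    count = 0
    for perm in permutations(vertices[1:]):
        cycle = (first,) + perm
        if all(parts[cycle[i]] != parts[cycle[(i + 1) % len(cycle)]]
               for i in range(len(cycle))):
            count += 1
    return count // 2              # each undirected cycle is seen in both directions

print(hamiltonian_cycles([2, 2]))  # 1  (the 4-cycle K_{2,2})
print(hamiltonian_cycles([3, 3]))  # 6  (= 3! * 2! / 2 for K_{3,3})
```

For the balanced bipartite case $K_{n,n}$ the count is the well-known $n!\,(n-1)!/2$, which the sketch reproduces for $n=2,3$.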
We prove a generalization of Jennrich's uniqueness theorem for tensor decompositions in the undercomplete setting. Our uniqueness theorem is based on an alternative definition of the standard tensor decomposition, which we call matrix-vector decomposition. Moreover, in the same settings in which our uniqueness theorem applies, we also design and analyze an efficient randomized algorithm to compute the unique minimum matrix-vector decomposition (and thus a tensor rank decomposition of minimum rank). As an application of our uniqueness theorem and our efficient algorithm, we show how to compute all matrices of minimum rank (up to scalar multiples) in certain generic vector spaces of matrices.
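For background, the classical Jennrich procedure in the undercomplete setting recovers the factors of a generic low-rank third-order tensor by simultaneously diagonalizing two random combinations of its slices. The NumPy sketch below illustrates that classical procedure only (it is not the paper's matrix-vector decomposition algorithm, and the dimensions and names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random rank-r third-order tensor T = sum_i a_i (x) b_i (x) c_i.
n, r = 6, 3
A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Classical Jennrich: take two random linear combinations of the slices.
x, y = rng.standard_normal(n), rng.standard_normal(n)
Mx = np.einsum('ijk,k->ij', T, x)   # = A diag(C^T x) B^T
My = np.einsum('ijk,k->ij', T, y)   # = A diag(C^T y) B^T

# Mx pinv(My) = A diag(...) pinv(A); under genericity its eigenvectors with
# nonzero eigenvalues recover the a_i up to permutation and scaling.
eigvals, eigvecs = np.linalg.eig(Mx @ np.linalg.pinv(My))
idx = np.argsort(-np.abs(eigvals))[:r]     # keep the r dominant eigenvectors
A_hat = np.real(eigvecs[:, idx])

# Each recovered column should be (nearly) parallel to some true column of A.
cos = np.abs(A_hat.T @ A) / (
    np.linalg.norm(A_hat, axis=0)[:, None] * np.linalg.norm(A, axis=0))
print(np.round(cos.max(axis=1), 3))        # entries close to 1.0
```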
Canalization is a key organizing principle in complex systems, particularly in gene regulatory networks. It describes how certain input variables exert dominant control over a function's output, thereby imposing hierarchical structure and conferring robustness to perturbations. Degeneracy, in contrast, captures redundancy among input variables and reflects the complete dominance of some variables by others. Both properties influence the stability and dynamics of discrete dynamical systems, yet their combinatorial underpinnings remain incompletely understood. Here, we derive recursive formulas for counting Boolean functions with prescribed numbers of essential variables and given canalizing properties. In particular, we determine the number of non-degenerate canalizing Boolean functions -- that is, functions for which all variables are essential and at least one variable is canalizing. Our approach extends earlier enumeration results on canalizing and nested canalizing functions. It provides a rigorous foundation for quantifying how frequently canalization occurs among random Boolean functions and for assessing its pronounced over-representation in biological network models, where it contributes to both robustness and to the emergence of distinct regulatory roles.
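As a sanity check on the quantities being counted, a brute-force enumeration over truth tables (practical only for very small $n$, and not the recursive formulas derived in the paper) can count the non-degenerate canalizing functions directly; for $n=2$ it finds 8 such functions, namely all ten functions depending on both variables except XOR and XNOR:

```python
from itertools import product

def essential_vars(f, n):
    """Indices of variables the truth table f actually depends on."""
    ess = set()
    for i in range(n):
        for x in product((0, 1), repeat=n):
            y = list(x); y[i] ^= 1
            if f[x] != f[tuple(y)]:
                ess.add(i)
                break
    return ess

def is_canalizing(f, n):
    """True if fixing some variable i to some value a forces the output."""
    return any(
        len({f[x] for x in product((0, 1), repeat=n) if x[i] == a}) == 1
        for i in range(n) for a in (0, 1))

def count_nondegenerate_canalizing(n):
    inputs = list(product((0, 1), repeat=n))
    count = 0
    for bits in product((0, 1), repeat=2 ** n):   # all 2^(2^n) truth tables
        f = dict(zip(inputs, bits))
        if len(essential_vars(f, n)) == n and is_canalizing(f, n):
            count += 1
    return count

print(count_nondegenerate_canalizing(2))  # 8 of the 16 two-variable functions
print(count_nondegenerate_canalizing(3))  # double-exponential growth limits this to small n
```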
We revisit the Strong Birthday Problem (SBP) introduced in [1]. The problem is stated as follows: what is the minimum number of people we have to choose so that everyone has a shared birthday with probability at least 1/2? We derive recurrence relations to compute the probability, and further show a nice connection to the associated Stirling numbers of the second kind to derive additional recurrences. We implement the recurrences using dynamic programming as well as compute the values using the combinatorial formula, and provide numerical results.
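For reference, the sketch below computes the relevant probability exactly via a standard inclusion-exclusion over the days occupied by exactly one person (a textbook route, not necessarily the recurrences or the associated-Stirling-number formulas of the paper), cross-checks it against exhaustive enumeration for tiny parameters, and probes the threshold for 365 days, which the literature reports to be 3064 people:

```python
from fractions import Fraction
from itertools import product
from math import comb, perm

def p_everyone_shares(k, d=365):
    """Exact probability that, among k people with d equally likely birthdays,
    no birthday is held by exactly one person (i.e. everyone shares a birthday).
    Inclusion-exclusion over the set of days occupied by exactly one person."""
    total = sum((-1) ** j * comb(d, j) * perm(k, j) * (d - j) ** (k - j)
                for j in range(min(k, d) + 1))
    return Fraction(total, d ** k)

def brute_force(k, d):
    """Exhaustive check of the same probability for tiny k and d."""
    good = sum(1 for b in product(range(d), repeat=k)
               if all(b.count(day) != 1 for day in range(d)))
    return Fraction(good, d ** k)

assert p_everyone_shares(4, 3) == brute_force(4, 3)

# Around the reported threshold for 365 days (smallest k with probability >= 1/2).
print(float(p_everyone_shares(3063)), float(p_everyone_shares(3064)))
```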
This paper investigates why and when the edge-based districting problem becomes computationally intractable. The overall problem is represented as an exact mathematical programming formulation consisting of an objective function and several constraint groups, each enforcing a well-known districting criterion such as balance, contiguity, or compactness. While districting is known to be NP-hard in general, we study what happens when specific constraint groups are relaxed or removed. The results identify precise boundaries between tractable subproblems (in P) and intractable ones (NP-hard). The paper also discusses implications for node-based analogs of the featured districting problems, and it considers alternative notions of certain criteria in its analysis.
The simple continued fractions for the Golden & Silver means are well-known. It is astonishing that, as far as we know, no one has published half-iterates (let alone quarter-iterates) for the corresponding algorithms. We also examine the cosine and logistic maps (with parameter $2 < \lambda < 3$).
For a subset $X$ of the vertex set $\VV(\GG)$ of a graph $\GG$, we denote by $\partial(X)$ the set of edges of $\GG$ with exactly one end in $X$ and refer to it as the cut (or edge cut) of $X$. A graph $\GG=(\VV,\EE)$ is matching covered if every edge $e \in \EE(\GG)$ lies in some perfect matching of $\GG$. A cut $C$ of a matching covered graph $\GG$ is a separating cut if, for every edge $e$, there is a perfect matching $M_{e}$ of $\GG$ such that $e \in M_{e}$ and $|C \cap M_{e}| = 1$; it is a tight cut if $|C \cap M| = 1$ for every perfect matching $M$ of $\GG$. For $X, Y \subseteq \VV(\GG)$, we write $E[X,Y]$ for the set of edges with one end in $X$ and the other in $Y$, so that $\partial(X)=E[X,\overline{X}]$, where $\overline{X}=\VV(\GG) \setminus X$. An edge cut $\partial(X)$ is trivial if $|X|=1$ or $|\overline{X}|=1$. A matching covered graph free of nontrivial tight cuts is a brace if it is bipartite and a brick if it is non-bipartite. An edge $e$ of a brace $\GG$ is \emph{thin} if, for every tight cut $\partial(X)$ of $\GG - e$, either $|X| \le 3$ or $|\overline{X}| \le 3$. Carvalho, Lucchesi and Murty conjectured that there exists a positive constant $c$ such that every brace $\GG$ has at least $c|\VV(\GG)|$ thin edges \cite{DBLP:journals/combinatorics/LucchesiCM15}. He and Lu \cite{HE2025153} gave a lower bound on the number of thin edges in a brace in terms of its number of cubic vertices. We ask whether there exists a planar brace with no cubic vertices and answer this question in the negative: every planar brace contains cubic vertices. Moreover, we establish a quantitatively tight lower bound on the number of cubic vertices in a planar brace, as well as tight upper bounds on the numbers of thin and nonthin edges in a planar brace.
Given a sequence of tasks, each with a positive integer period, the pinwheel scheduling problem asks for a valid schedule: one task is performed per day, and each task is performed at least once within every window of consecutive days whose length equals its period. Chan and Chin conjectured in 1993 that a valid schedule exists for any sequence of tasks whose density, the sum of the reciprocals of the periods, is at most $\frac{5}{6}$. Recently, Kawamura settled this conjecture affirmatively. In this paper we consider an extension with real periods proposed by Kawamura, in which a valid schedule must perform each task $i$ with real period~$a_{i}$ at least $l$ times in any $\lceil l a_{i} \rceil$ consecutive days, for every positive integer $l$. We show that any sequence of tasks whose periods take three distinct real values and whose density is at most $\frac{5}{6}$ admits a valid schedule. We conjecture that the bound of Chan and Chin holds for real periods as well.
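To fix the definitions, here is a small checker for the classical integer-period condition on a cyclic schedule (our illustrative code; it does not handle the real-period extension studied in the paper):

```python
def is_valid_pinwheel(schedule, periods):
    """Check the classical (integer-period) pinwheel condition for a cyclic
    schedule: task i must appear in every window of periods[i] consecutive
    days of the infinite repetition of `schedule`."""
    L = len(schedule)
    for task, p in enumerate(periods):
        for start in range(L):
            window = [schedule[(start + j) % L] for j in range(p)]
            if task not in window:
                return False
    return True

# Periods (2, 4, 4) have density 1/2 + 1/4 + 1/4 = 1; the cyclic schedule
# 0 1 0 2 performs task 0 every other day and tasks 1, 2 every fourth day.
print(is_valid_pinwheel([0, 1, 0, 2], [2, 4, 4]))   # True
print(is_valid_pinwheel([0, 1, 2, 0], [2, 4, 4]))   # False: task 0 misses a window
```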
We study a robust extensible bin packing problem with budgeted uncertainty, in which item sizes lie in the intersection of a box with a one-norm ball. We propose a scenario generation algorithm for this problem, which alternates between solving a master robust bin-packing problem with a finite uncertainty set and solving a separation problem. We first show that separation is strongly NP-hard given solutions to the continuous relaxation of the master problem. Then, focusing on the separation problem for the integer master problem, we show that it becomes a special case of the continuous convex knapsack problem, which is known to be weakly NP-hard. Next, we prove that our special case, in which each of the functions is piecewise linear with only two pieces, remains NP-hard. We develop a pseudo-polynomial dynamic program (DP) and a fully polynomial-time approximation scheme (FPTAS) for this special case, with running times matching those of a binary knapsack FPTAS. Our computational study shows that the DP can be significantly more efficient in practice than solving the problem with specially ordered set (SOS) constraints using advanced mixed-integer programming (MIP) solvers. Our experiments also demonstrate the application of our separation method to solving the robust extensible bin packing problem, including an evaluation of deferring the exact solution of the master problem and separating based on approximate master solutions in intermediate iterations. Finally, a case study based on real elective surgery data demonstrates the potential advantage of our model over the actual schedule and over optimal nominal schedules.
We investigate a combinatorial reconfiguration problem on oriented graphs, where a reconfiguration step (edge-flip) is the inversion of the orientation of a single edge. A recently published conjecture that is relevant to the correctness of a Markov Chain Monte Carlo sampler for directed flag complexes states that any simple oriented graph admits a flip sequence that monotonically decreases the number of directed 3-cycles to zero; this conjecture is known to be true for complete oriented graphs. We show that, in general, the conjecture does not hold. As the main tool for disproving it, we introduce the concept of FBD-graphs (fully blocked digraphs). An FBD-graph is a directed graph that does not contain any directed 1-, 2-, or 3-cycles, and for which any edge-flip creates a directed 3-cycle. We prove that the non-existence of FBD-graphs is a necessary condition for the conjecture to hold, and we succeed in constructing FBD-graphs. On the other hand, we show that complete graphs, as well as graphs in which every edge is incident to at most two triangles, cannot be fully blocked. In addition to being relevant for determining in which cases the above-mentioned sampling process is correct, the concept of FBD-graphs might also be useful for other problems and yields interesting questions for further study.
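To make the FBD definition concrete, the following brute-force checker (ours; it does not reproduce the paper's constructions, which are far larger) tests whether a tiny digraph is fully blocked:

```python
from itertools import permutations

def has_short_directed_cycle(edges):
    """True if the digraph contains a directed 1-, 2-, or 3-cycle."""
    E = set(edges)
    if any(u == v for u, v in E):                       # 1-cycles (loops)
        return True
    if any((v, u) in E for u, v in E):                  # 2-cycles
        return True
    nodes = {x for e in E for x in e}
    return any((a, b) in E and (b, c) in E and (c, a) in E
               for a, b, c in permutations(nodes, 3))   # 3-cycles

def is_fully_blocked(edges):
    """Brute-force check of the FBD property: no directed 1-/2-/3-cycle,
    and reversing any single edge creates a directed 3-cycle."""
    E = set(edges)
    if has_short_directed_cycle(E):
        return False
    for e in E:
        flipped = (E - {e}) | {(e[1], e[0])}
        if not has_short_directed_cycle(flipped):
            return False
    return True

# A transitive tournament on 3 vertices is not fully blocked: flipping the
# edge (0, 1) leaves the digraph free of directed 3-cycles.
print(is_fully_blocked([(0, 1), (1, 2), (0, 2)]))   # False
```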
Although the analysis of loops has received limited attention because of its complications, it has already been found that heuristically enhancing loops decreases the variance of degree distributions and thereby improves the robustness of connectivity. While many real scale-free networks are known to contain short loops such as triangles, the distributions of longer loops in a wider class of networks remain to be investigated. By examining the lengths of the shortest loops in various networks with continuously varying degree distributions, including three typical types of realistic scale-free networks, classical Erd\H{o}s-R\'enyi random graphs, and regular networks, we find a relation between narrower degree distributions and longer loops. In particular, we show that networks with narrower degree distributions contain longer shortest loops, as a universal property across a wide class of random networks. We suggest that the robustness of connectivity is enhanced by constructing long loops of length $O(\log N)$.
We present the first known pivot Gray code for spanning trees of complete graphs, listing all spanning trees such that consecutive trees differ by pivoting a single edge around a vertex. This pivot Gray code thus addresses an open problem posed by Knuth in The Art of Computer Programming, Volume 4 (Exercise 101, Section 7.2.1.6, [Knuth, 2011]), rated at a difficulty level of 46 out of 50, and imposes stricter conditions than existing revolving-door or edge-exchange Gray codes for spanning trees of complete graphs. Our recursive algorithm generates each spanning tree in constant amortized time using $O(n^2)$ space. In addition, we provide a novel proof of Cayley's formula, $n^{n-2}$, for the number of spanning trees in a complete graph, derived from our recursive approach. We extend the algorithm to generate edge-exchange Gray codes for general graphs with $n$ vertices, achieving $O(n^2)$ time per tree using $O(n^2)$ space. For specific graph classes, the algorithm can be optimized to generate edge-exchange Gray codes for spanning trees in constant amortized time per tree for complete bipartite graphs, $O(n)$-amortized time per tree for fan graphs, and $O(n)$-amortized time per tree for wheel graphs, all using $O(n^2)$ space.
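As a quick independent illustration of Cayley's formula (via the classical Prüfer correspondence rather than the recursive proof given in the paper), decoding all $n^{n-2}$ Prüfer sequences yields exactly the labeled trees on $n$ vertices:

```python
from itertools import product

def prufer_to_tree(seq, n):
    """Decode a Pruefer sequence (length n-2 over {0,...,n-1}) into the edge
    set of the labeled tree it encodes."""
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    edges = []
    for x in seq:
        leaf = min(v for v in range(n) if degree[v] == 1)
        edges.append((leaf, x))
        degree[leaf] -= 1
        degree[x] -= 1
    u, v = [w for w in range(n) if degree[w] == 1]
    edges.append((u, v))
    return edges

n = 5
trees = {frozenset(frozenset(e) for e in prufer_to_tree(s, n))
         for s in product(range(n), repeat=n - 2)}
print(len(trees), n ** (n - 2))   # both are 125, matching Cayley's formula
```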
This paper reports the results of numerical computations for determining the number of polyominoes of size $n$ ($n$-ominoes). We verify the existing counts for $n \le 50$ and newly compute the total number of polyominoes up to $n \le 59$, extending the counting limit. This work shows that, in addition to optimizing the search algorithm for the polyomino counting problem, multi-threading dramatically improves computational efficiency.
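For orientation, a minimal growth-based enumerator (ours, usable only for tiny sizes and in no way comparable to the paper's optimized parallel counting algorithm) reproduces the first few free-polyomino counts:

```python
def canon(cells):
    """Canonical form of a polyomino: the lexicographically smallest
    translation-normalized cell set over all 8 rotations/reflections."""
    forms = []
    pts = list(cells)
    for _ in range(4):
        pts = [(y, -x) for x, y in pts]           # rotate 90 degrees
        for mirror in (pts, [(-x, y) for x, y in pts]):
            minx = min(x for x, _ in mirror)
            miny = min(y for _, y in mirror)
            forms.append(tuple(sorted((x - minx, y - miny) for x, y in mirror)))
    return min(forms)

def count_free_polyominoes(n_max):
    """Count free polyominoes (up to translation, rotation and reflection)
    by growing them one cell at a time; feasible only for small sizes."""
    current = {canon({(0, 0)})}
    counts = [1]
    for _ in range(n_max - 1):
        grown = set()
        for poly in current:
            cells = set(poly)
            for (x, y) in cells:
                for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if nb not in cells:
                        grown.add(canon(cells | {nb}))
        current = grown
        counts.append(len(current))
    return counts

print(count_free_polyominoes(7))   # [1, 1, 2, 5, 12, 35, 108]
```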
We introduce and study a new four-peg variant of the Tower of Hanoi problem under parity constraints. Two pegs are neutral and allow arbitrary disc placements, while the remaining two pegs are restricted to discs of a prescribed parity: one for even-labelled discs and the other for odd-labelled discs. Within this constrained setting, we investigate four canonical optimization objectives corresponding to distinct target configurations and derive for each the exact number of moves required to optimally transfer the discs. We establish a coupled system of recursive relations governing the four optimal move functions and unfold them into higher-order recurrences and explicit closed forms. These formulas exhibit periodic growth patterns and reveal that all solutions grow exponentially, but at a significantly slower rate than the classical three-peg case. In particular, each optimal move sequence has a half-exponential-like asymptotic order induced by the parity restriction. In addition, we define the associated parity-constrained Hanoi graph, whose vertices represent all feasible states and whose edges represent legal moves. We determine its order, degrees, connectivity, planarity, diameter, Hamiltonicity, clique number, and chromatic properties, and show that it lies strictly between the classical three- and four-peg Hanoi graphs with respect to the inclusion relation.
Counting non-isomorphic tree-like multigraphs that include self-loops and multiple edges is an important problem in combinatorial enumeration, with applications in chemical graph theory, polymer science, and network modeling. Traditional counting techniques, such as Polya's theorem and branching algorithms, often face limitations due to symmetry handling and computational complexity. This study presents a unified dynamic programming framework for enumerating tree-like graphs characterized by a fixed number of vertices, self-loops, and multiple edges. The proposed method utilizes canonical rooted representations and recursive decomposition of subgraphs to eliminate redundant configurations, ensuring exact counting without the need for explicit structure generation. The framework also provides analytical bounds and recurrence relations that describe the growth behaviour of such multigraphs. This work extends previous models that treated self-loops and multiple edges separately, offering a general theoretical foundation for the enumeration of complex tree-like multigraphs in both mathematical and chemical domains.
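As a small illustration of the kind of recurrence-based counting involved (this is the classical recurrence for unlabeled rooted trees, the simplest member of the tree-like family, and not the paper's multigraph dynamic program):

```python
def rooted_trees(n_max):
    """Number of unlabeled rooted trees on 1..n_max vertices, via the classical
    recurrence r(n) = (1/(n-1)) * sum_{j<n} c(j) * r(n-j),
    where c(j) = sum over divisors d of j of d * r(d)."""
    r = [0, 1]                      # r[1] = 1 (the single-vertex tree)
    for n in range(2, n_max + 1):
        total = 0
        for j in range(1, n):
            c = sum(d * r[d] for d in range(1, j + 1) if j % d == 0)
            total += c * r[n - j]
        r.append(total // (n - 1))  # the sum is always divisible by n - 1
    return r[1:]

print(rooted_trees(8))   # [1, 1, 2, 4, 9, 20, 48, 115]
```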
A longstanding open question in algorithm design is whether "combinatorial" matrix multiplication algorithms -- avoiding Strassen-like divide-and-conquer -- can achieve truly subcubic runtime $n^{3-\delta}$. We present an $O(n^{2.89})$-time exact algorithm, which only sums convolutions in $\mathbb{Z}_m^k$ (multivariate polynomial multiplications) via FFT, building on the work of Cohn, Kleinberg, Szegedy and Umans (CKSU'05). While the algorithm avoids recursion, the asymptotic speedup arises only for impractically large matrices. Motivated by practical applications, we use this baseline to develop a new framework for fast approximate matrix multiplication (AMM), via low-degree approximations of the CKSU polynomials. We show that combining the aforementioned algorithm with black-box linear sketching already breaks the longstanding linear speed-accuracy tradeoff for AMM (Sarlos'06, Clarkson-Woodruff'13, Pagh'11, Cohn-Lewis'00), achieving $\frac{1}{r^{1.1}}\|\mathbf{A}\|_F^2\|\mathbf{B}\|_F^2$ error in $O(rn^2)$ time. Our main result is a low-degree approximation scheme for the CKSU polynomials, based on a Fourier-concentration lemma, yielding substantially smaller error in the distributional setting where $\mathbf{A},\mathbf{B}$ come from an i.i.d. product distribution; for random Gaussian matrices, this practical AMM algorithm attains smaller error than the best rank-$r$ SVD of the output matrix $\mathbf{A}\mathbf{B}$, in time $O(rn^2)$. This is a substantial improvement over iterative Krylov subspace methods for low-rank approximation. Our theoretical and empirical results suggest the possibility of replacing MatMuls with sums of convolutions in LLM training and inference.
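For context, the linear speed-accuracy baseline referred to above can be reproduced with a plain random-projection AMM sketch (a classical method, not the paper's convolution-based scheme; the setup below is ours), whose squared Frobenius error decays like $\|\mathbf{A}\|_F^2\|\mathbf{B}\|_F^2 / r$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sketched_amm(A, B, r):
    """Classical sketch-and-multiply approximate matrix multiplication:
    E[(A S)(S^T B)] = A B for a random +/-1 sketch S with r columns,
    and the squared Frobenius error decays like 1/r."""
    S = rng.choice((-1.0, 1.0), size=(A.shape[1], r)) / np.sqrt(r)
    return (A @ S) @ (S.T @ B)

n = 512
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
exact = A @ B
for r in (64, 256, 1024):
    err = np.linalg.norm(sketched_amm(A, B, r) - exact) ** 2
    scale = np.linalg.norm(A) ** 2 * np.linalg.norm(B) ** 2 / r
    print(r, err / scale)   # ratio stays O(1): the 1/r speed-accuracy tradeoff
```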
We show that every unweighted string graph $G$ has an $O(1)$-distortion planar emulator: that is, there exists an (edge-weighted) planar graph $H$ with $V(H) = V(G)$, such that every pair of vertices $(u,v)$ satisfies $\delta_G(u,v) \le \delta_H(u,v) \le O(1) \cdot \delta_G(u,v).$
The model of relative-error property testing of Boolean functions has been the subject of significant recent research effort [CDH+24, CPPS25a, CPPS25b]. In this paper we consider the problem of relative-error testing an unknown and arbitrary $f: \{0,1\}^n \to \{0,1\}$ for the property of being a unate function, i.e. a function that is either monotone non-increasing or monotone non-decreasing in each of the $n$ input variables. Our first result is a one-sided non-adaptive algorithm for this problem that uses $\tilde{O}(\log(N)/\epsilon)$ samples and queries, where $N=|f^{-1}(1)|$ is the number of satisfying assignments of the function being tested and the value of $N$ is given as an input parameter to the algorithm. Building on this algorithm, we next give a one-sided adaptive algorithm for this problem that does not need to be given the value of $N$ and with high probability uses $\tilde{O}(\log(N)/\epsilon)$ samples and queries. We also give lower bounds for both adaptive and non-adaptive two-sided algorithms that are given the value of $N$ up to a constant multiplicative factor. In the non-adaptive case, our lower bounds essentially match the complexity of the algorithm that we provide.