We consider the problem of computing $\ell$-page queue layouts, which are linear arrangements of the vertices together with an assignment of the edges to one of $\ell$ pages such that no two edges nest on any page. Inspired by previous work on the extension of stack layouts, we consider the setting of extending a partial $\ell$-page queue layout into a complete one, and we primarily analyze the problem through the refined lens of parameterized complexity. We obtain novel algorithms and lower bounds that provide a detailed picture of the problem's complexity under various measures of incompleteness, and we identify surprising distinctions between queue and stack layouts in the extension setting.
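The validity condition is easy to state in code: on each page, no two edges may nest with respect to the linear order. Below is a minimal sketch (names of our own choosing, not the paper's algorithm) that checks a given layout.

```python
from itertools import combinations

def is_queue_layout(order, pages):
    """Check that a vertex order and an edge-to-page assignment form a
    valid queue layout: no two edges on the same page nest.
    `order` maps vertex -> position; `pages` maps edge (u, v) -> page."""
    by_page = {}
    for (u, v), p in pages.items():
        a, b = sorted((order[u], order[v]))
        by_page.setdefault(p, []).append((a, b))
    for edges in by_page.values():
        for (a, b), (c, d) in combinations(edges, 2):
            if (a < c and d < b) or (c < a and b < d):  # one edge nests in the other
                return False
    return True

order = {"u": 0, "v": 1, "w": 2, "x": 3}
print(is_queue_layout(order, {("u", "w"): 0, ("v", "x"): 0}))  # True: crossing is allowed
print(is_queue_layout(order, {("u", "x"): 0, ("v", "w"): 0}))  # False: nested pair
```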
This paper proposes a fast and unsupervised scheme for the polygonal approximation of a closed digital curve. The scheme is demonstrated to be faster than state-of-the-art approximation schemes while remaining competitive with them in Rosin's measure and in aesthetic quality. It comprises three phases: initial segmentation, iterative vertex insertion, and iterative merging, followed by vertex adjustment. The initial segmentation detects sharp turnings, i.e., vertices of seemingly high curvature. Since some important low-curvature vertices may be missed in this first phase, iterative vertex insertion adds vertices in regions where the curvature changes slowly but steadily. The initial phase may also pick up undesirable vertices, so merging is used to eliminate the redundant ones. Finally, vertex adjustment enhances the aesthetic look of the approximation. The quality of the approximations is measured using Rosin's measure, and the robustness of the proposed scheme with respect to geometric transformations is also observed.
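The vertex-insertion phase can be pictured with a short sketch. The following is our own illustrative reading, not the authors' exact criterion: between consecutive vertices, insert the curve point farthest from the chord joining them, and repeat until every deviation is within tolerance.

```python
import numpy as np

def insert_vertices(curve, vertices, tol):
    """Iteratively insert, between consecutive approximation vertices, the
    curve point farthest from their chord, until all deviations <= tol.
    `curve` is an (n, 2) array; `vertices` is a sorted list of indices."""
    changed = True
    while changed:
        changed = False
        new = [vertices[0]]
        for i, j in zip(vertices, vertices[1:]):
            p, q = curve[i], curve[j]
            seg = curve[i + 1:j]
            if len(seg):
                dx, dy = q - p
                # perpendicular distance of each intermediate point to chord pq
                dist = np.abs(dx * (seg[:, 1] - p[1]) - dy * (seg[:, 0] - p[0]))
                dist /= np.hypot(dx, dy) + 1e-12
                k = int(np.argmax(dist))
                if dist[k] > tol:
                    new.append(i + 1 + k)
                    changed = True
            new.append(j)
        vertices = new
    return vertices
```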
This paper addresses the problem of improving the query performance of the triangular expansion algorithm (TEA) for computing visibility regions by finding the most advantageous instance of its preprocessing structure, the triangular mesh. The TEA recursively traverses the mesh while keeping track of the visibility region, the set of all points visible from a query point in a polygonal world. We show that the measured query time is approximately proportional to the number of triangle-edge expansions performed during the mesh traversal. We propose a new type of triangular mesh that minimizes the expected number of expansions, assuming the query points are drawn from a known probability distribution. We design a heuristic method to approximate this mesh and evaluate the approach on many challenging instances that resemble real-world environments. The proposed mesh improves mean query times by 12-16% compared to the reference constrained Delaunay triangulation. The approach is suited to offline applications that require computing millions of queries and in which preprocessing time is not a concern. The implementation is publicly available to allow replication of our experiments and to serve the community.
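Under our reading of the abstract, the quantity the optimized mesh targets can be evaluated per triangle; the sketch below (all names are ours) states that objective for a known query distribution.

```python
import numpy as np

def expected_expansions(tri_areas, tri_expansions, density_at_centroids):
    """E[expansions] = sum_t P(query lands in t) * expansions(t), with
    P(query in t) approximated by area(t) * density at t's centroid."""
    w = np.asarray(tri_areas, float) * np.asarray(density_at_centroids, float)
    w /= w.sum()  # normalize to a probability distribution over triangles
    return float(w @ np.asarray(tri_expansions, float))
```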
Let $\mathcal{A}$ be the subdivision of $\mathbb{R}^d$ induced by $m$ convex polyhedra having $n$ facets in total. We prove that $\mathcal{A}$ has combinatorial complexity $O(m^{\lceil d/2 \rceil} n^{\lfloor d/2 \rfloor})$ and that this bound is tight. The bound is mentioned several times in the literature, but no proof for arbitrary dimension has been published before.
Many problems in Euclidean geometry, arising in computational design and fabrication, amount to a system of constraints, which is challenging to solve. We suggest a new general approach to the solution, which is to start with analogous problems in isotropic geometry. Isotropic geometry can be viewed as a structure-preserving simplification of Euclidean geometry. The solutions found in the isotropic case give insight and can initialize optimization algorithms to solve the original Euclidean problems. We illustrate this general approach with three examples: quad-mesh mechanisms, composite asymptotic-geodesic gridshells, and asymptotic gridshells with constant node angle.
The shadow of an abstract simplicial complex $K$ with vertices in $\mathbb R^N$ is the subset of $\mathbb R^N$ defined as the union of the convex hulls of the simplices of $K$. The Vietoris--Rips complex of a metric space $(S,d)$ at scale $\beta$ is the abstract simplicial complex in which each $k$-simplex corresponds to $(k+1)$ points of $S$ within diameter $\beta$. When $S\subset\mathbb R^2$ and $d(a,b)=\|a-b\|$ is the standard Euclidean distance, the natural shadow projection of the Vietoris--Rips complex is already known to be $1$-connected. We extend this result beyond the standard Euclidean distance on $S\subset\mathbb R^N$ to a family of path-based metrics $d^\varepsilon_{S}$. From the pairwise Euclidean distances of the points of $S$, we introduce a family (parametrized by $\varepsilon$) of path-based Vietoris--Rips complexes $R^\varepsilon_\beta(S)$ at scale $\beta>0$. If $S\subset\mathbb R^2$ is Hausdorff-close to a planar Euclidean graph $G$, we provide quantitative bounds on the scales $\beta,\varepsilon$ for which the shadow projection map of the Vietoris--Rips complex of $(S,d^\varepsilon_{S})$ at scale $\beta$ is $1$-connected. As a novel application, this paper first studies the homotopy-type recovery of $G\subset\mathbb R^N$ using the abstract Vietoris--Rips complex of a Hausdorff-close sample $S$ under the $d^\varepsilon_{S}$ metric. Our result on the $1$-connectivity of the shadow projection then also yields a geometrically close embedding for the reconstruction. Based on the length of the shortest loop and the large-scale distortion of the embedding of $G$, we quantify the choice of a suitable sample density $\varepsilon$ and scale $\beta$ at which the shadow of $R^\varepsilon_\beta(S)$ is homotopy-equivalent and Hausdorff-close to $G$.
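A hedged sketch of one natural reading of the construction: take $d^\varepsilon_S$ to be the shortest-path distance in the graph joining samples within Euclidean distance $\varepsilon$, and build the Vietoris--Rips complex at scale $\beta$ on top of it. The definitions below are our assumptions based on the abstract, not the paper's verbatim ones.

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import dijkstra
from scipy.spatial.distance import squareform, pdist

def path_metric(S, eps):
    """All-pairs path-based distances: shortest paths in the graph that
    joins samples within Euclidean distance eps (one reading of d^eps_S).
    `S` is an (n, 2) array of sample points."""
    D = squareform(pdist(S))
    D[D > eps] = 0  # drop long edges; zeros mean "no edge" for csgraph
    return dijkstra(D, directed=False)

def rips_edges_and_triangles(S, eps, beta):
    """1- and 2-simplices of the Vietoris--Rips complex at scale beta."""
    d = path_metric(S, eps)
    n = len(S)
    edges = [(i, j) for i, j in combinations(range(n), 2) if d[i, j] <= beta]
    tris = [(i, j, k) for i, j, k in combinations(range(n), 3)
            if max(d[i, j], d[i, k], d[j, k]) <= beta]
    return edges, tris
```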
Recently, the application of quantum computation to topological data analysis (TDA) has received increasing attention. In particular, several quantum algorithms have been proposed for estimating (normalized) Betti numbers, a central challenge in TDA. However, it was recently proven that estimating Betti numbers is an NP-hard problem, revealing a complexity-theoretic limitation to achieving a generic quantum advantage for this task. Motivated by this limitation and inspired by previous progress, we explore broader quantum approaches to TDA. First, we consider scenarios in which a simplicial complex is specified in a more informative form, enabling alternative quantum algorithms to estimate Betti numbers and persistent Betti numbers. We then move beyond Betti numbers and study the problem of testing the homology class of a given cycle, as well as distinguishing between homology classes. We also introduce cohomological techniques for these problems, along with a quantum algorithm. We then discuss their potential use in the testing and tracking of homology classes, which can be useful for TDA applications. Our results show that, despite the hardness of general Betti number estimation, quantum algorithms can still offer speed-ups in structured settings.
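For orientation, the classical linear-algebra route to Betti numbers that such quantum algorithms aim to accelerate: over the rationals, $\beta_k = \dim C_k - \operatorname{rank}\partial_k - \operatorname{rank}\partial_{k+1}$. A minimal sketch of this baseline, not the paper's method:

```python
import numpy as np

def betti_numbers(boundaries, dims):
    """Betti numbers over the rationals from boundary matrices.
    boundaries[k] is the matrix of d_k : C_k -> C_{k-1} (d_0 = 0);
    dims[k] is the number of k-simplices."""
    ranks = [np.linalg.matrix_rank(B) if B.size else 0 for B in boundaries]
    ranks.append(0)  # rank of the boundary map above the top dimension
    return [dims[k] - ranks[k] - ranks[k + 1] for k in range(len(dims))]

# A hollow triangle: 3 vertices, 3 edges, no 2-simplex -> betas (1, 1).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])
print(betti_numbers([np.zeros((0, 3)), d1], dims=[3, 3]))  # [1, 1]
```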
We initiate the study of computing diverse triangulations of a given polygon. Given a simple $n$-gon $P$, an integer $ k \geq 2 $, a quality measure $\sigma$ on the set of triangulations of $P$, and a factor $ \alpha \geq 1 $, we formulate the Diverse and Nice Triangulations (DNT) problem, which asks to compute $k$ \emph{distinct} triangulations $T_1,\dots,T_k$ of $P$ such that a) their diversity, $\sum_{i < j} d(T_i,T_j) $, is as large as possible \emph{and} b) they are nice, i.e., $\sigma(T_i) \leq \alpha \sigma^* $ for all $1\leq i \leq k$. Here, $d$ denotes the symmetric difference of the edge sets of two triangulations, and $\sigma^*$ denotes the best quality over all triangulations of $P$, e.g., the minimum Euclidean length. As our main result, we provide a $\mathrm{poly}(n,k)$-time approximation algorithm for the DNT problem that returns a collection of $k$ distinct triangulations whose diversity is at least $1 - \Theta(1/k)$ times the optimum and each of which satisfies the quality constraint. This is accomplished by studying \emph{bi-criteria triangulations} (BCT), i.e., triangulations that simultaneously optimize two criteria, a topic of independent interest. We complement our approximation algorithms by showing that the DNT problem and the BCT problem are NP-hard. Finally, for the version where diversity is defined as $\min_{i < j} d(T_i,T_j) $, we show a reduction from the problem of computing optimal Hamming codes and provide an $n^{O(k)}$-time $\tfrac12$-approximation algorithm. Note that this improves over the naive brute-force $2^{O(nk)}$-time bound for enumerating all $k$-tuples among the triangulations of a simple $n$-gon, whose total number can be as large as the $(n-2)$-nd Catalan number.
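The diversity objective itself is simple to evaluate. A minimal sketch, with triangulations represented by their sets of diagonals:

```python
from itertools import combinations

def diversity(triangulations):
    """Sum over pairs of the symmetric difference of edge sets,
    i.e., the objective sum_{i<j} d(T_i, T_j) from the DNT problem."""
    edge_sets = [{frozenset(e) for e in T} for T in triangulations]
    return sum(len(a ^ b) for a, b in combinations(edge_sets, 2))

# Two triangulations of a convex quadrilateral differ in one diagonal.
T1 = [(0, 2)]  # diagonals only; the boundary edges are shared
T2 = [(1, 3)]
print(diversity([T1, T2]))  # 2
```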
Spectral partitioning is a method that can be used to compute small sparse cuts or small edge-separators in a wide variety of graph classes, by computing the second-smallest eigenvalue (and eigenvector) of the Laplacian matrix. Upper bounds on this eigenvalue for certain graph classes imply that the method obtains small edge-separators for these classes, usually with a sub-optimal dependence on the maximum degree. In this work, we show that a related method, called reweighted spectral partitioning, guarantees near-optimal sparse vertex-cuts and vertex-separators in a wide variety of graph classes. In many cases, this involves little-to-no necessary dependence on maximum degree. We also obtain a new proof of the planar separator theorem, a strengthened eigenvalue bound for bounded-genus graphs, and a refined form of the recent Cheeger-style inequality for vertex expansion via a specialized dimension-reduction step.
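For context, a minimal sketch of the classical (unweighted, edge-version) spectral method that the reweighted variant builds on: compute the Fiedler vector and sweep its sorted entries for the sparsest cut. This is the standard textbook procedure, not the paper's algorithm, and it assumes a connected graph.

```python
import numpy as np

def spectral_sweep(A):
    """Classical spectral partitioning on a symmetric adjacency matrix A:
    take the Laplacian eigenvector for the second-smallest eigenvalue
    (the Fiedler vector) and sweep its sorted entries for the cut of
    minimum sparsity (edge weight cut / size of the smaller side)."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    order = np.argsort(vecs[:, 1])       # sort vertices by Fiedler value
    n = len(order)
    best, best_set = np.inf, None
    for i in range(1, n):
        mask = np.zeros(n, dtype=bool)
        mask[order[:i]] = True
        cut = A[mask][:, ~mask].sum()    # weight crossing the candidate cut
        sparsity = cut / min(i, n - i)
        if sparsity < best:
            best, best_set = sparsity, order[:i]
    return best_set, best
```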
We consider the problem of finding and enumerating polyominoes that can be folded into multiple non-isomorphic boxes. While several computational approaches have been proposed, including SAT, randomized algorithms, and decision diagrams, none of them performs at scale. We argue that existing SAT encodings are hindered by the presence of global constraints (e.g., graph connectivity or acyclicity), which are generally hard to encode effectively and hard for solvers to reason about. In this work, we propose a new SAT-based approach that replaces these global constraints with simple local constraints that have substantially better propagation properties. Our approach dramatically improves the scalability of both computing and enumerating common box unfoldings: (i) while previous approaches could only find common unfoldings of two boxes up to area 88, ours easily scales beyond 150, and (ii) while previous approaches could only enumerate common unfoldings up to area 30, ours scales up to 60. This allows us to rule out 46, 54, and 58 as the smallest area admitting a common unfolding of three boxes, thereby refuting a conjecture of Xu et al. (2017).
We study the minimum membership geometric set cover (MMGSC) problem [SoCG, 2023] in the continuous setting. In this problem, the input consists of a set $P$ of $n$ points in $\mathbb{R}^{2}$ and a geometric object $t$; the goal is to find a set $\mathcal{S}$ of translated copies of $t$ that covers all the points in $P$ while minimizing $\mathsf{memb}(P, \mathcal{S})$, where $\mathsf{memb}(P, \mathcal{S})=\max_{p\in P}|\{s\in \mathcal{S}: p\in s\}|$. For unit squares, we present a simple $O(n\log n)$-time algorithm that outputs a $1$-membership cover whose size is at most twice that of an optimal solution. We establish the NP-hardness of computing the minimum number of non-overlapping unit squares required to cover a given set of points. Our algorithm also generalizes to fixed-size hyperboxes in $d$-dimensional space, where a $1$-membership cover of size at most $2^{d-1}$ times the minimum is computed in $O(dn\log n)$ time. Additionally, we characterize a class of objects for which a $1$-membership cover always exists. For unit disks, we prove that a $2$-membership cover exists for any point set, and that its size is at most $7$ times that of an optimal cover. For arbitrary convex polygons with $m$ vertices, we present an algorithm that outputs a $4$-membership cover in $O(n\log n + nm)$ time.
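One natural greedy consistent with the stated $O(n\log n)$ bound (our own reconstruction, not necessarily the paper's algorithm): split the plane into width-1 vertical strips anchored at the leftmost point, then stack half-open unit squares greedily within each strip, so every point lies in exactly one square.

```python
from collections import defaultdict
import math

def one_membership_unit_squares(points):
    """Cover points with half-open unit squares [l, l+1) x [b, b+1) so that
    each point lies in exactly one square. Width-1 strips partition the
    points; within a strip, a greedy sweep by y stacks disjoint squares."""
    x0 = min(x for x, _ in points)
    strips = defaultdict(list)
    for x, y in points:
        strips[math.floor(x - x0)].append((x, y))  # strip index of the point
    squares = []  # (left, bottom) corners of the chosen unit squares
    for s, pts in strips.items():
        pts.sort(key=lambda p: p[1])
        top = -math.inf
        for _, y in pts:
            if y >= top:                     # not covered by the last square
                squares.append((x0 + s, y))  # new square [y, y+1) in this strip
                top = y + 1
    return squares
```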
For a graph $G$ spanning a metric space, the dilation of a pair of points is the ratio of their distance in the shortest path graph metric to their distance in the metric space. Given a graph $G$ and a budget $k$, a classic problem is to augment $G$ with $k$ additional edges to reduce the maximum dilation. In this note, we consider a variant of this problem where the goal is to reduce the average dilation for pairs of points in $G$. We provide an $O(k)$ approximation algorithm for this problem, matching the approximation ratio given by prior work for the maximum dilation variant.
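A short sketch of the quantity being optimized, assuming a Euclidean graph whose edge weights are the point distances (function names are ours):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import squareform, pdist

def average_dilation(points, edges):
    """Mean over point pairs of (graph distance / Euclidean distance)
    for a connected Euclidean graph on distinct `points` with edge
    list `edges` (pairs of point indices)."""
    D = squareform(pdist(points))          # Euclidean pairwise distances
    W = np.zeros_like(D)
    for i, j in edges:
        W[i, j] = W[j, i] = D[i, j]        # zeros are treated as non-edges
    G = shortest_path(W, directed=False)   # shortest-path graph metric
    iu = np.triu_indices(len(points), k=1)
    return (G[iu] / D[iu]).mean()
```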
We present a computational methodology for obtaining rotationally symmetric sets of points satisfying discrete geometric constraints, and demonstrate its applicability by discovering new solutions to some well-known problems in combinatorial geometry. Our approach takes the usage of SAT solvers in discrete geometry further by directly embedding rotational symmetry into the combinatorial encoding of geometric configurations. Then, to realize concrete point sets corresponding to abstract designs provided by a SAT solver, we introduce a novel local-search realizability solver, which shows excellent practical performance despite the intrinsic $\exists \mathbb{R}$-completeness of the problem. Leveraging this combined approach, we provide symmetric extremal solutions to the Erd\H{o}s-Szekeres problem, as well as a minimal odd-sized solution with 21 points for the everywhere-unbalanced-points problem, improving on the previously known 23-point configuration. The imposed symmetries yield more aesthetically appealing solutions, enhancing human interpretability, and simultaneously offer computational benefits by significantly reducing the number of variables required to encode discrete geometric problems.
The cost and accuracy of simulating complex physical systems using the Finite Element Method (FEM) scales with the resolution of the underlying mesh. Adaptive meshes improve computational efficiency by refining resolution in critical regions, but typically require task-specific heuristics or cumbersome manual design by a human expert. We propose Adaptive Meshing By Expert Reconstruction (AMBER), a supervised learning approach to mesh adaptation. Starting from a coarse mesh, AMBER iteratively predicts the sizing field, i.e., a function mapping from the geometry to the local element size of the target mesh, and uses this prediction to produce a new intermediate mesh using an out-of-the-box mesh generator. This process is enabled through a hierarchical graph neural network, and relies on data augmentation by automatically projecting expert labels onto AMBER-generated data during training. We evaluate AMBER on 2D and 3D datasets, including classical physics problems, mechanical components, and real-world industrial designs with human expert meshes. AMBER generalizes to unseen geometries and consistently outperforms multiple recent baselines, including ones using Graph and Convolutional Neural Networks, and Reinforcement Learning-based approaches.
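The label-projection step can be pictured with a small sketch: transfer an expert mesh's local element sizes onto the nodes of a generated mesh by nearest-neighbor lookup. This is a simplified stand-in for AMBER's projection, with names of our own choosing.

```python
import numpy as np
from scipy.spatial import cKDTree

def project_sizing_field(expert_nodes, expert_sizes, query_nodes):
    """Project an expert mesh's local element sizes onto another mesh's
    nodes by nearest-neighbor lookup (a simple stand-in for the label
    projection used during training)."""
    expert_sizes = np.asarray(expert_sizes)
    tree = cKDTree(expert_nodes)           # spatial index over expert nodes
    _, idx = tree.query(query_nodes)       # nearest expert node per query node
    return expert_sizes[idx]
```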
This paper introduces $k$-Dynamic Time Warping ($k$-DTW), a novel dissimilarity measure for polygonal curves. $k$-DTW has stronger metric properties than Dynamic Time Warping (DTW) and is more robust to outliers than the Fr\'{e}chet distance, the two gold standards of dissimilarity measures for polygonal curves. We show interesting properties of $k$-DTW and give an exact algorithm as well as a $(1+\varepsilon)$-approximation algorithm for $k$-DTW via a parametric search for the $k$-th largest matched distance. We prove the first dimension-free learning bounds for curves and further learning-theoretic results. Not only does $k$-DTW admit a smaller sample size than DTW for the problem of learning the median of curves, where some factors depending on the curves' complexity $m$ are replaced by $k$, but we also show a surprising separation on the associated Rademacher and Gaussian complexities: $k$-DTW admits strictly smaller bounds than DTW, by a factor $\tilde\Omega(\sqrt{m})$ when $k\ll m$. We complement our theoretical findings with an experimental illustration of the benefits of using $k$-DTW for clustering and nearest-neighbor classification.
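For reference, the classic DTW dynamic program that $k$-DTW modifies. The precise $k$-DTW aggregate (built around the $k$-th largest matched distance) is defined in the paper, so the sketch below shows only the standard baseline.

```python
import numpy as np

def dtw(P, Q):
    """Classic dynamic-time-warping DP between polygonal curves P, Q
    (arrays of points): cost of the cheapest monotone alignment."""
    n, m = len(P), len(Q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = np.linalg.norm(P[i - 1] - Q[j - 1])  # matched distance
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```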
We present the winning implementation of the Seventh Computational Geometry Challenge (CG:SHOP 2025). The task in this challenge was to find non-obtuse triangulations for given planar regions, respecting a given set of constraints consisting of extra vertices and edges that must be part of the triangulation. The goal was to minimize the number of introduced Steiner points. Our approach is to maintain a constrained Delaunay triangulation, for which we repeatedly remove, relocate, or add Steiner points. We use local search to choose the action that improves the triangulation the most, until the resulting triangulation is non-obtuse.
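The local search repeatedly needs to detect obtuse triangles. A minimal test (our own helper, not the authors' code) checks the dot product of the edge vectors at each corner:

```python
import numpy as np

def is_nonobtuse(a, b, c):
    """True iff triangle abc has no angle exceeding 90 degrees: the dot
    product of the two edge vectors at every corner must be >= 0."""
    a, b, c = map(np.asarray, (a, b, c))
    return all(np.dot(q - p, r - p) >= 0
               for p, q, r in ((a, b, c), (b, c, a), (c, a, b)))

print(is_nonobtuse((0, 0), (1, 0), (0, 1)))    # True  (right triangle)
print(is_nonobtuse((0, 0), (4, 0), (1, 0.5)))  # False (obtuse at the third vertex)
```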
We present PyRigi, a novel Python package designed to study the rigidity properties of graphs and frameworks. Among many other capabilities, PyRigi can determine whether a graph admits only finitely many ways, up to isometries, of being drawn in the plane once the edge lengths are fixed, whether it has a unique embedding, or whether it satisfies such properties even after the removal of any of its edges. By implementing algorithms from the scientific literature, PyRigi enables the exploration of rigidity properties of structures that would be out of reach for computations by hand. With reliable and robust algorithms, as well as clear, well-documented methods that are closely connected to the underlying mathematical definitions and results, PyRigi aims to be a practical and powerful general-purpose tool for working mathematicians interested in rigidity theory. PyRigi is open source, easy to use, and ready for researchers to benefit from its computational potential.
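For a flavor of what "finitely many drawings" means combinatorially: by Laman's theorem, a graph is minimally rigid in the plane iff $|E| = 2|V| - 3$ and every set of $k \geq 2$ vertices spans at most $2k - 3$ edges. A brute-force check of this count condition (exponential in $n$; real implementations use polynomial-time methods such as the pebble game):

```python
from itertools import combinations

def is_laman(n, edges):
    """Brute-force Laman check: |E| = 2n - 3 and every vertex subset of
    size k >= 2 spans at most 2k - 3 edges. For illustration only."""
    if len(edges) != 2 * n - 3:
        return False
    for k in range(2, n + 1):
        for sub in combinations(range(n), k):
            s = set(sub)
            spanned = sum(1 for u, v in edges if u in s and v in s)
            if spanned > 2 * k - 3:
                return False
    return True

print(is_laman(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))          # True
print(is_laman(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]))  # False
```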
Geometry is a fundamental branch of mathematics and plays a crucial role in evaluating the reasoning capabilities of multimodal large language models (MLLMs). However, existing multimodal mathematics benchmarks mainly focus on plane geometry and largely ignore solid geometry, which requires spatial reasoning and is more challenging than plane geometry. To address this critical gap, we introduce SolidGeo, the first large-scale benchmark specifically designed to evaluate the performance of MLLMs on mathematical reasoning tasks in solid geometry. SolidGeo consists of 3,113 real-world K-12 and competition-level problems, each paired with visual context and annotated with difficulty levels and fine-grained solid geometry categories. Our benchmark covers a wide range of 3D reasoning subjects such as projection, unfolding, spatial measurement, and spatial vector, offering a rigorous testbed for assessing solid geometry. Through extensive experiments, we observe that MLLMs encounter substantial challenges in solid geometry math tasks, with a considerable performance gap relative to human capabilities on SolidGeo. Moreover, we analyze the performance, inference efficiency and error patterns of various models, offering insights into the solid geometric mathematical reasoning capabilities of MLLMs. We hope SolidGeo serves as a catalyst for advancing MLLMs toward deeper geometric reasoning and spatial intelligence.