Efficiently solving the Shortest Vector Problem (SVP) in two-dimensional lattices holds practical significance in cryptography and computational geometry. While simpler than its high-dimensional counterpart, two-dimensional SVP motivates scalable solutions for high-dimensional lattices and benefits applications like sequence cipher cryptanalysis involving large integers. In this work, we first propose a novel definition of reduced bases and develop an efficient adaptive lattice reduction algorithm \textbf{CrossEuc} that strategically applies the Euclidean algorithm across dimensions. Building on this framework, we introduce \textbf{HVec}, a vectorized generalization of the Half-GCD algorithm originally defined for integers, which can efficiently halve the bit-length of two vectors and may be of independent interest. By iteratively invoking \textbf{HVec}, our optimized algorithm \textbf{HVecSBP} computes a reduced basis in $O(\log n \, M(n))$ time for arbitrary input bases of bit-length $n$, where $M(n)$ denotes the cost of multiplying two $n$-bit integers. Compared to existing algorithms, our design applies to input lattices in general form, eliminating the cost of pre-converting input bases to Hermite Normal Form (HNF). Comprehensive experimental results demonstrate that for input lattice bases in HNF, \textbf{HVecSBP} achieves at least a $13.5\times$ efficiency improvement over existing methods. For general-form input bases, converting them to HNF before applying \textbf{HVecSBP} offers only marginal advantages in extreme cases where the two basis vectors are nearly degenerate. As the linear dependency between input basis vectors decreases, however, directly employing \textbf{HVecSBP} yields increasingly significant efficiency gains, outperforming hybrid approaches that rely on prior HNF conversion.
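For background, the two-dimensional setting builds on the classical Lagrange-Gauss reduction, which plays the role of the Euclidean algorithm for 2D lattice bases: repeatedly subtract the nearest-integer multiple of the shorter basis vector from the longer one. The sketch below is only this well-known quadratic-time baseline, not the paper's \textbf{CrossEuc} or \textbf{HVecSBP} algorithms; the function name is illustrative.

```python
def lagrange_gauss(u, v):
    """Lagrange-Gauss reduction of a 2D integer lattice basis (u, v).

    A 2D analogue of the Euclidean algorithm: repeatedly subtract the
    nearest-integer multiple of the shorter vector from the longer one.
    Assumes u and v are linearly independent integer vectors; on return,
    the first vector is a shortest nonzero vector of the lattice.
    """
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    while True:
        if norm2(u) > norm2(v):
            u, v = v, u                      # keep u the shorter vector
        n, d = norm2(u), dot(u, v)
        q = (2 * d + n) // (2 * n)           # nearest integer to d/n
        if q == 0:
            return u, v                      # nothing left to remove from v
        v = (v[0] - q * u[0], v[1] - q * u[1])
```

For example, `lagrange_gauss((1, 1), (1, 2))` reduces a basis of $\mathbb{Z}^2$ to two vectors of squared norm 1. The paper's contribution is replacing this vector-at-a-time loop with Half-GCD-style bit-length halving to reach quasi-linear time.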
We present polynomial-time approximation schemes based on the local search technique for both the geometric (discrete) independent set (\mdis) and geometric (discrete) dominating set (\mdds) problems, where the objects are disks of arbitrary radii and axis-parallel squares of arbitrary side length. Further, we show that the \mdds~problem is \apx-hard for various shapes in the plane. Finally, we prove that both the \mdis~and \mdds~problems are \np-hard for unit disks intersecting a horizontal line and for axis-parallel unit squares intersecting a straight line with slope $-1$.
Efficient and accurate evaluation of containment queries for regions bounded by trimmed NURBS surfaces is important in many graphics and engineering applications. However, the algebraic complexity of surface-surface intersections makes gaps and overlaps between surfaces difficult to avoid for in-the-wild surface models. By considering this problem through the lens of the generalized winding number (GWN), a mathematical construction that is indifferent to the arrangement of surfaces in the shape, we can define a containment query that is robust to non-watertightness. Applying contemporary techniques for the 3D GWN to arbitrary curved surfaces would require some form of geometric discretization, potentially inducing containment misclassifications near boundary components. In contrast, our proposed method computes an accurate GWN directly on the curved geometry of the input model. We accomplish this using a novel reformulation of the relevant surface integral via Stokes' theorem, which in turn permits an efficient adaptive quadrature calculation on the boundary and trimming curves of the model. While this is sufficient for "far-field" query points that are distant from the surface, we augment this approach for "near-field" query points (i.e., within a bounding box) and even those coincident with the surface patches via a strategy that directly identifies and accounts for the jump discontinuity in the scalar field. We demonstrate that our method of evaluating the GWN field is robust to complex trimming geometry in a CAD model, and is accurate up to arbitrary precision at arbitrary distances from the surface. Furthermore, the derived containment query is robust to non-watertightness while respecting all curved features of the input shape.
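The GWN is easiest to see in the plane: for a 2D curve it is the sum of signed angles subtended at the query point, which is an integer for closed, watertight curves and degrades gracefully to a fractional value when the boundary has gaps. The following is a minimal sketch of this planar analogue only (names illustrative; the paper's method operates on 3D trimmed NURBS via Stokes' theorem).

```python
import math

def generalized_winding_2d(q, polyline):
    """Generalized winding number of query point q for a 2D polyline.

    Sums the signed angle subtended at q by each segment.  For a closed,
    watertight curve the result is an integer (1 inside, 0 outside); for
    curves with gaps it becomes fractional rather than failing outright.
    """
    total = 0.0
    for a, b in zip(polyline, polyline[1:]):
        ax, ay = a[0] - q[0], a[1] - q[1]
        bx, by = b[0] - q[0], b[1] - q[1]
        # Signed angle between the vectors (a - q) and (b - q).
        total += math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    return total / (2.0 * math.pi)
```

For a counterclockwise square around the origin, a query at the center evaluates to 1 and a query outside to 0; deleting one edge of the square yields a value strictly between 0 and 1, which is the robustness property the containment query exploits.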
We revisit extending the Kolmogorov-Smirnov distance between probability distributions to the multidimensional setting and make new arguments about the proper way to approach this generalization. Our proposed formulation maximizes the difference over orthogonal dominating rectangular ranges (d-sided rectangles in R^d), and is an integral probability metric. We also prove that the distance between a distribution and a sample from the distribution converges to 0 as the sample size grows, and bound this rate. Moreover, we show that one can, up to this same approximation error, compute the distance efficiently in 4 or fewer dimensions; specifically the runtime is near-linear in the size of the sample needed for that error. With this, we derive a delta-precision two-sample hypothesis test using this distance. Finally, we show these metric and approximation properties do not hold for other popular variants.
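For intuition, the distance can be evaluated by brute force on small two-dimensional samples by maximizing the empirical-probability gap over one-sided dominating ranges anchored at sample coordinates. This sketch covers a single orthant direction only and is far from the paper's near-linear algorithm; the function name is illustrative.

```python
def ks_distance_2d(A, B):
    """Brute-force two-sample KS-style distance over dominating ranges in 2D.

    Maximizes the empirical-probability gap over ranges of the form
    (-inf, x] x (-inf, y], with candidate corners drawn from the samples.
    """
    xs = sorted({p[0] for p in A} | {p[0] for p in B})
    ys = sorted({p[1] for p in A} | {p[1] for p in B})

    def frac(S, x, y):
        # Empirical probability of the range (-inf, x] x (-inf, y].
        return sum(1 for px, py in S if px <= x and py <= y) / len(S)

    return max(abs(frac(A, x, y) - frac(B, x, y)) for x in xs for y in ys)
```

For two well-separated samples such as `[(0, 0), (1, 1)]` and `[(2, 2), (3, 3)]` the distance is 1.0, attained by the range anchored at $(1, 1)$.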
Vineyards are a common way to study persistence diagrams of a data set which is changing, as strong stability means that it is possible to pair points in ``nearby'' persistence diagrams, yielding a family of point sets which connect into curves when stacked. Recent work has also studied monodromy in the persistent homology transform, demonstrating some interesting connections between an input shape and monodromy in the persistent homology transform for 0-dimensional homology embedded in $\mathbb{R}^2$. In this work, we re-characterize monodromy in terms of periodicity of the associated vineyard of persistence diagrams. We construct a family of objects in any dimension which have non-trivial monodromy for $l$-persistence of any periodicity and for any $l$. More generally we prove that any knot or link can appear as a vineyard for a shape in $\mathbb{R}^d$, with $d\geq 3$. This shows an intriguing and, to the best of our knowledge, previously unknown connection between knots and persistence vineyards. In particular this shows that vineyards are topologically as rich as one could possibly hope.
The problem of finding a path between two points while avoiding obstacles is critical in robotic path planning. We focus on the feasibility problem: determining whether such a path exists. We model the robot as a query-specific rectangular object capable of moving parallel to its sides. The obstacles are axis-aligned, rectangular, and may overlap. Most previous works consider only nondisjoint rectangular objects and point-sized or fixed-size robots. Our approach introduces a novel technique leveraging generalized Gabriel graphs and constructs a data structure to facilitate online queries regarding path feasibility with varying robot sizes in sublinear time. To efficiently handle feasibility queries, we propose an online algorithm that uses a sweep line to construct a generalized Gabriel graph under the $L_\infty$ norm, capturing key gap constraints between obstacles. We utilize a persistent disjoint-set union data structure to answer feasibility queries in $\mathcal{O}(\log n)$ time using $\mathcal{O}(n)$ total space.
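The role of the disjoint-set union structure can be illustrated with a simplified offline analogue: treat obstacles and two boundary walls as graph nodes and each gap between them as a weighted edge. A gap narrower than the robot is impassable, so obstacles joined by narrow gaps form barriers, and the robot fits exactly when its size is at most the bottleneck gap at which the walls first become connected. The interface below is hypothetical; the paper's structure is persistent and built by a sweep line.

```python
class DSU:
    """Union-find with path halving (non-persistent, for illustration)."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def max_passable_size(n, gaps, wall_a, wall_b):
    """Largest robot size that can pass between two boundary walls.

    Nodes 0..n-1 are obstacles plus two sentinel walls; each (u, v, g)
    is a gap of width g between nodes u and v.  Merging nodes in order
    of increasing gap width, the walls first connect at the bottleneck
    width, which is exactly the largest robot that still fits.
    """
    dsu = DSU(n)
    for u, v, g in sorted(gaps, key=lambda e: e[2]):
        dsu.union(u, v)
        if dsu.find(wall_a) == dsu.find(wall_b):
            return g
    return float("inf")  # walls never connect: any robot size passes
```

With walls 0 and 1 and obstacles 2 and 3, gaps `[(0, 2, 1.0), (2, 3, 3.0), (3, 1, 2.0)]` give a bottleneck of 3.0: any robot of size at most 3.0 slips through the widest gap in every wall-to-wall chain.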
In this work, we leverage GPUs to construct probabilistically collision-free convex sets in robot configuration space on the fly. This extends the use of modern motion planning algorithms that leverage such representations to changing environments. These planners rapidly and reliably optimize high-quality trajectories, without the burden of challenging nonconvex collision-avoidance constraints. We present an algorithm that inflates collision-free piecewise linear paths into sequences of convex sets (SCS) that are probabilistically collision-free using massive parallelism. We then integrate this algorithm into a motion planning pipeline, which leverages dynamic roadmaps to rapidly find one or multiple collision-free paths, and inflates them. We then optimize the trajectory through the probabilistically collision-free sets, simultaneously using the candidate trajectory to detect and remove collisions from the sets. We demonstrate the efficacy of our approach on a simulation benchmark and a KUKA iiwa 7 robot manipulator with perception in the loop. On our benchmark, our approach runs 17.1 times faster and yields a 27.9% increase in reliability over the nonlinear trajectory optimization baseline, while still producing high-quality motion plans.
The traveling salesman problem (TSP) famously asks for a shortest tour that a salesperson can take to visit a given set of cities in any order. In this paper, we ask how much faster $k \ge 2$ salespeople can visit the cities if they divide the task among themselves. We show that, in the two-dimensional Euclidean setting, two salespeople can always achieve a speedup of at least $\frac12 + \frac1\pi \approx 0.818$, for any given input, and there are inputs where they cannot do better. We also give (non-matching) upper and lower bounds for $k \geq 3$.
Accurately estimating decision boundaries in black-box systems is critical for ensuring safety, quality, and feasibility in real-world applications. However, existing methods iteratively refine boundary estimates by sampling in regions of uncertainty, providing no guarantee of closeness to the decision boundary and incurring unnecessary exploration that is especially disadvantageous when evaluations are costly. This paper presents Epsilon-Neighborhood Decision-Boundary Governed Estimation (EDGE), a sample-efficient and function-agnostic algorithm that leverages the intermediate value theorem to estimate the location of the decision boundary of a black-box binary classifier within a user-specified epsilon-neighborhood. Evaluations are conducted on three nonlinear test functions and a case study of an electric grid stability problem with uncertain renewable power injection. The EDGE algorithm demonstrates superior sample efficiency and better boundary approximation than adaptive sampling techniques and grid-based searches.
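The intermediate-value idea underlying this kind of estimation can be illustrated with bisection along a segment whose endpoints receive different labels; the boundary crossing is then localized to within a user-specified epsilon. This is a minimal sketch under that assumption, with an illustrative interface, not the paper's EDGE algorithm itself.

```python
def boundary_bisection(classifier, x_in, x_out, eps):
    """Locate a decision-boundary crossing between two points via bisection.

    classifier maps a point (tuple of floats) to a label; x_in and x_out
    must receive different labels.  Each query halves the bracketing
    segment, so the returned midpoint lies within eps of the boundary
    crossing along the segment after O(log(1/eps)) evaluations.
    """
    assert classifier(x_in) != classifier(x_out)

    def midpoint(a, b):
        return tuple((ai + bi) / 2.0 for ai, bi in zip(a, b))

    def dist(a, b):
        return max(abs(ai - bi) for ai, bi in zip(a, b))

    lo, hi = x_in, x_out
    while dist(lo, hi) > eps:
        mid = midpoint(lo, hi)
        if classifier(mid) == classifier(lo):
            lo = mid          # crossing lies in the far half
        else:
            hi = mid          # crossing lies in the near half
    return midpoint(lo, hi)
```

For the classifier `lambda x: x[0] > 0.5`, bisecting between `(0.0,)` and `(1.0,)` with `eps=1e-6` returns a point within about `5e-7` of the true boundary at 0.5, using roughly 20 classifier evaluations.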
We describe a framework that unifies the two types of polynomials introduced respectively by Bacher and Mouton and by Rutschmann and Wettstein to analyze the number of triangulations of point sets. Using this insight, we generalize the triangulation polynomials of chains to a wider class of near-edges, enabling efficient computation of the number of triangulations of certain families of point sets. We also use the framework in an attempt to improve the result of Rutschmann and Wettstein; our lack of success suggests that their result is close to optimal.
We consider the problem of packing a large square with nonoverlapping unit squares. Let $W(x)$ be the minimum wasted area when a large square of side length $x$ is packed with unit squares. In Roth and Vaughan's paper proving the lower bound $W(x) \notin o(x^{1/2})$, a good square is defined to be a square with inclination at most $10^{-10}$ with respect to the large square. In this article, we prove that in calculating the asymptotic growth of the wasted space, it suffices to consider packings consisting only of good squares. This simplifies the lower-bound proof in Roth and Vaughan's paper by removing the need to handle bad squares.
Topological Data Analysis (TDA) combines computational topology and data science to extract and analyze intrinsic topological and geometric structures in data sets in a metric space. While persistent homology (PH), a widely used tool in TDA that tracks the lifespan of topological features through a filtration process, has shown its effectiveness in applications, it is inherently limited to homotopy invariants and overlooks finer geometric and combinatorial details. To bridge this gap, we introduce two novel commutative-algebra-based frameworks which extend beyond homology by incorporating tools from computational commutative algebra: (1) \emph{persistent ideals}, derived from the decomposition of algebraic objects associated to simplicial complexes, such as those in the theory of edge ideals and Stanley--Reisner ideals, which provide new commutative-algebra-based barcodes and offer a richer characterization of topological and geometric structures in filtrations; (2) the \emph{persistent chain complex of free modules} associated with a traditional persistent simplicial complex, obtained by labelling each chain in the chain complex of the persistent simplicial complex with elements of a commutative ring, which enables us to detect local topological information via purely algebraic operations. \emph{Crucially, both newly established frameworks recover the topological information obtained from conventional PH while providing strictly more information.} Therefore, they offer new insights in computational topology, computational algebra, and data science.
We introduce Masked Anchored SpHerical Distances (MASH), a novel multi-view and parametrized representation of 3D shapes. Inspired by multi-view geometry and motivated by the importance of perceptual shape understanding for learning 3D shapes, MASH represents a 3D shape as a collection of observable local surface patches, each defined by a spherical distance function emanating from an anchor point. We further leverage the compactness of spherical harmonics to encode the MASH functions, combined with a generalized view cone with a parameterized base that masks the spatial extent of the spherical function to attain locality. We develop a differentiable optimization algorithm capable of converting any point cloud into a MASH representation accurately approximating ground-truth surfaces with arbitrary geometry and topology. Extensive experiments demonstrate that MASH is versatile for multiple applications including surface reconstruction, shape generation, completion, and blending, achieving superior performance thanks to its unique representation encompassing both implicit and explicit features.
Lattice structures, distinguished by their customizable geometries at the microscale and outstanding mechanical performance, have found widespread application across various industries. One fundamental process in their design and manufacturing is constructing boundary representation (B-rep) models, which are essential for running advanced applications like simulation, optimization, and process planning. However, this construction process presents significant challenges due to the high complexity of lattice structures, particularly in generating nodal shapes where robustness and smoothness issues can arise from the complex intersections between struts. To address these challenges, this paper proposes a novel approach for lattice structure construction by cutting struts and filling void regions with subdivisional nodal shapes. Inspired by soap films, the method generates smooth, shape-preserving control meshes using Laplacian fairing and subdivides them through the point-normal Loop (PN-Loop) subdivision scheme to obtain subdivisional nodal shapes. The proposed method ensures robust model construction with reduced shape deviations, enhanced surface fairness, and smooth transitions between subdivisional nodal shapes and retained struts. The effectiveness of the method has been demonstrated by a series of examples and comparisons. The code will be open-sourced upon publication.
Levels and sublevels in arrangements -- and, dually, $k$-sets and $(\leq k)$-sets -- are fundamental notions in discrete and computational geometry and natural generalizations of convex polytopes, which correspond to the $0$-level. A long-standing conjecture of Eckhoff, Linhart, and Welzl, which would generalize McMullen's Upper Bound Theorem for polytopes and provide an exact refinement of asymptotic bounds by Clarkson, asserts that for all $k\leq \lfloor \frac{n-d-2}{2}\rfloor$, the number of $(\leq k)$-sets of a set $S$ of $n$ points in $\mathbf{R}^d$ is maximized if $S$ is the vertex set of a neighborly polytope. As a new tool for studying this conjecture and related problems, we introduce the $g$-matrix, which generalizes both the $g$-vector of a simple polytope and a Gale dual version of the $g$-vector studied by Lee and Welzl. Our main result is that the $g$-matrix of every vector configuration in $\mathbf{R}^3$ is non-negative, which implies the Eckhoff--Linhart--Welzl conjecture in the case where $d=n-4$. As a corollary, we obtain the following result about crossing numbers: Consider a configuration $V\subset S^2 \subset \mathbf{R}^3$ of $n$ unit vectors, and connect every pair of vectors by the unique shortest geodesic arc between them in the unit sphere $S^2$. This yields a drawing of the complete graph $K_n$ in $S^2$, which we call a spherical arc drawing. Complementing previous results for rectilinear drawings, we show that the number of crossings in any spherical arc drawing of $K_n$ is at least $\frac{1}{4}\lfloor \frac{n}{2}\rfloor \lfloor \frac{n-1}{2}\rfloor \lfloor \frac{n-2}{2}\rfloor \lfloor \frac{n-3}{2}\rfloor$, which equals the conjectured value of the crossing number of $K_n$. Moreover, the lower bound is attained if $V$ is coneighborly, i.e., if every open linear halfspace contains at least $\lfloor (n-2)/2 \rfloor$ of the vectors in $V$.
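The stated lower bound coincides with Guy's conjectured crossing number $Z(n)=\frac{1}{4}\lfloor \frac{n}{2}\rfloor \lfloor \frac{n-1}{2}\rfloor \lfloor \frac{n-2}{2}\rfloor \lfloor \frac{n-3}{2}\rfloor$ of $K_n$, which is straightforward to evaluate for small $n$; the helper below simply computes this formula.

```python
def conjectured_crossing(n):
    """Guy's conjectured crossing number Z(n) of the complete graph K_n,
    equal to the spherical-arc lower bound stated in the abstract:
    Z(n) = (1/4) * floor(n/2) * floor((n-1)/2) * floor((n-2)/2) * floor((n-3)/2).
    """
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4
```

The first few values are $Z(4)=0$, $Z(5)=1$, $Z(6)=3$, $Z(7)=9$, $Z(8)=18$, matching the known crossing numbers of $K_n$ in this range.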
We study linear relations between face numbers of levels in arrangements. Let $V = \{ v_1, \ldots, v_n \} \subset \mathbf{R}^{r}$ be a vector configuration in general position, and let $\mathcal{A}(V)$ be the polar dual arrangement of hemispheres in the $d$-dimensional unit sphere $S^d$, where $d=r-1$. For $0\leq s \leq d$ and $0 \leq t \leq n$, let $f_{s,t}(V)$ denote the number of faces of \emph{level} $t$ and dimension $d-s$ in the arrangement $\mathcal{A}(V)$ (these correspond to partitions $V=V_-\sqcup V_0 \sqcup V_+$ by linear hyperplanes with $|V_0|=s$ and $|V_-|=t$). We call the matrix $f(V):=[f_{s,t}(V)]$ the \emph{$f$-matrix} of $V$. Completing a long line of research on linear relations between face numbers of levels in arrangements, we determine, for every $n\geq r \geq 1$, the affine space $\mathfrak{F}_{n,r}$ spanned by the $f$-matrices of configurations of $n$ vectors in general position in $\mathbf{R}^r$; moreover, we determine the subspace $\mathfrak{F}^0_{n,r} \subset \mathfrak{F}_{n,r}$ spanned by all \emph{pointed} vector configurations (i.e., such that $V$ is contained in some open linear halfspace), which correspond to point sets in $\mathbf{R}^d$. This generalizes the classical fact that the Dehn--Sommerville relations generate all linear relations between the face numbers of simple polytopes (the faces at level $0$) and answers a question posed by Andrzejak and Welzl in 2003. The key notion for the statements and the proofs of our results is the $g$-matrix of a vector configuration, which determines the $f$-matrix and generalizes the classical $g$-vector of a polytope. By Gale duality, we also obtain analogous results for partitions of vector configurations by sign patterns of nontrivial linear dependencies, and for \emph{Radon partitions} of point sets in $\mathbf{R}^d$.
Inspired by the classical fractional cascading technique, we introduce new techniques to speed up the following type of iterated search in 3D: the input is a graph $\mathbf{G}$ with bounded degree, together with a set $H_v$ of 3D hyperplanes associated with every vertex $v$ of $\mathbf{G}$. The goal is to store the input such that, given a query point $q\in \mathbb{R}^3$ and a connected subgraph $\mathbf{H}\subset \mathbf{G}$, we can decide if $q$ is below or above the lower envelope of $H_v$ for every $v\in \mathbf{H}$. We show that using linear space, it is possible to answer queries in roughly $O(\log n + |\mathbf{H}|\sqrt{\log n})$ time, improving the trivial bound of $O(|\mathbf{H}|\log n)$ obtained by using planar point location data structures. Our data structure can in fact answer more general queries (it combines with shallow cuttings), and it even works when $\mathbf{H}$ is given one vertex at a time. We show that this has a number of new applications; in particular, we give improved solutions to a set of natural data structure problems that, to our knowledge, had not previously seen any improvements. We believe this is a very surprising result, because obtaining similar results for the planar point location problem was known to be impossible.
In this paper we show that two-dimensional nearest neighbor queries can be answered in optimal $O(\log n)$ time while supporting insertions in $O(\log^{1+\varepsilon}n)$ time. No previous data structure was known that supports $O(\log n)$-time queries and polylogarithmic-time insertions. To achieve logarithmic query time, our data structure uses a new technique related to fractional cascading that leverages the inherent geometry of this problem. Our method can also be used in other semi-dynamic scenarios.