We analyze a numerical method for computing Fredholm determinants of trace-class and Hilbert-Schmidt integral operators defined in terms of matrix-valued kernels on the entire real line. With this method, the Fredholm determinant is approximated by the determinant of a matrix constructed by truncating the kernel of the operator to a finite interval and then applying a quadrature rule. Under the assumption that the kernel decays exponentially, we derive an estimate relating the Fredholm determinant of the operator on the real line to that of its truncation to a finite interval. We then derive a quadrature error estimate relating the Fredholm determinant of a matrix-valued kernel on a finite interval to its numerical approximation obtained via an adaptive composite Simpson's quadrature rule. These results extend the analysis of Bornemann, which focused on Fredholm determinants of trace-class operators defined by scalar-valued kernels on a finite interval. Numerical results are provided for a Birman-Schwinger operator that characterizes the stability of stationary solutions of nonlinear wave equations.
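As a schematic illustration of the quadrature idea described above, the determinant of the discretized operator can be sketched as follows. This uses a simple Gauss-Legendre rule on a scalar kernel for brevity, rather than the adaptive composite Simpson's rule and matrix-valued kernels analyzed in the paper; `fredholm_det` is a hypothetical name.

```python
import numpy as np

def fredholm_det(K, a, b, n):
    """Approximate the Fredholm determinant det(I - K) of an integral
    operator with kernel K, truncated to [a, b], via an n-point
    Gauss-Legendre rule: det(I - W^{1/2} K(x_i, x_j) W^{1/2})."""
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)   # map nodes from [-1, 1]
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    M = np.eye(n) - sw[:, None] * K(x[:, None], x[None, :]) * sw[None, :]
    return np.linalg.det(M)
```

For the rank-one kernel $K(x,y)=e^{-(x^2+y^2)}$, the exact determinant on the truncated interval is $1-\int_a^b e^{-2x^2}\,dx$, which gives a convenient accuracy check.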
Underground infrastructure, such as pipelines and tunnels, can be vulnerable to transient ground deformation (TGD) caused by various vibration sources, including earthquakes and traffic loads. Current design methods are based on simple analytical models that idealize the soil movement as a traveling sinusoidal wave, neglecting both the system's inertia and the relative displacement at the soil-structure interface. However, this assumption may not be valid for large-diameter buried pipelines and tunnels, which require accurate dynamic analysis. To analyse the dynamic response of a buried straight beam subjected to transverse TGD, this study develops a new semi-analytical model based on Timoshenko beam on Winkler foundation theory. The closed-form analytical solution reveals that the vibration spectrum is divided into four parts, separated by three transition frequencies. Across each transition frequency, the oscillatory characteristics of the vibration modes change, significantly affecting the dynamic response of the system. To verify the validity of the proposed model, this work analyses the case study of a buried steel water pipeline of varying lengths and operating conditions, subjected to transverse TGD. Comparison of the obtained analytical solutions with finite element analysis results showed excellent agreement between the two approaches. The frequency response analysis revealed dynamic amplification of the soil-structure interaction for forcing frequencies near the system's fundamental frequency; such frequencies may fall within the range of dominant frequencies characterizing seismic waves, again requiring accurate dynamic analysis. The proposed methodology provides a robust analytical framework for evaluating the primary factors affecting the dynamic behavior of buried beams, giving a deeper understanding of the system response under various sources of ground vibration.
Porous electrodes are widely used in electrochemical systems, where accurately determining electric potentials, particularly overpotentials, is essential for understanding electrode behavior. At the macroscopic scale, porous electrodes are typically modeled using a dual-continuum approach, treating the porous solid phase and the liquid electrolyte as spatially superimposed domains. Determining potential distributions requires solving two Poisson equations that are nonlinearly coupled through Butler-Volmer kinetics under galvanostatic and potentiostatic operating modes. Under galvanostatic operation, these equations form an underconstrained singular system due to all-Neumann boundary conditions, posing numerical challenges. This paper systematically presents numerical methods for solving nonlinearly coupled Poisson equations in dual-continuum porous electrodes, with a particular focus on galvanostatic solutions. We mathematically establish solution uniqueness in terms of the potential difference between the electrode and electrolyte (or overpotential), as well as the individual potentials up to a shared constant shift. To resolve the nonuniqueness of the solution, we introduce three numerical approaches: (1) Lagrange Constrained Method (LCM), (2) Dirichlet Substitution Method (DSM), and (3) Global Constraining Method (GCM), where GCM enables solving the overpotential without imposing an explicit system reference potential. Additionally, we develop both decoupled and fully coupled nonlinear solution strategies and evaluate their computational performance in both homogeneous and heterogeneous conductivity cases. The presented numerical methods are general for addressing similar underconstrained nonlinear systems. A Python implementation is provided at https://github.com/harrywang1129/porous_electrode_solver.
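A one-dimensional scalar analogue shows why an all-Neumann Poisson system is singular and how a Lagrange-multiplier constraint, in the spirit of the LCM above but far simpler than the coupled dual-continuum system, restores uniqueness. The function name and discretization below are illustrative assumptions, not taken from the linked solver.

```python
import numpy as np

def neumann_poisson_lagrange(f, n):
    """Solve -u'' = f on [0,1] with homogeneous Neumann BCs on a uniform
    grid of n+1 points.  The all-Neumann matrix A is singular (constants
    lie in its null space), so we append a Lagrange multiplier enforcing
    a zero discrete mean, analogous to a Lagrange Constrained Method."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
    # finite-volume (half-cell) stencils for the Neumann boundary rows
    A[0, 0], A[0, 1] = 1.0, -1.0
    A[n, n], A[n, n - 1] = 1.0, -1.0
    A /= h * h
    b = f(x)
    b[0] *= 0.5
    b[n] *= 0.5                       # half-cells at the boundary
    # augmented saddle-point system [[A, 1], [w^T, 0]]
    w = np.full(n + 1, h)
    w[0] = w[n] = 0.5 * h             # trapezoidal mean weights
    M = np.zeros((n + 2, n + 2))
    M[:n + 1, :n + 1] = A
    M[:n + 1, n + 1] = 1.0
    M[n + 1, :n + 1] = w
    rhs = np.append(b, 0.0)
    u = np.linalg.solve(M, rhs)[:n + 1]
    return x, u
```

The multiplier absorbs any incompatibility in the right-hand side, and the mean constraint selects one member of the one-parameter solution family, mirroring how the solution of the electrode system is unique only up to a shared constant shift.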
This paper is concerned with solving the Helmholtz exterior Dirichlet and Neumann problems with large wavenumber $k$ and smooth obstacles using the standard second-kind boundary integral equations (BIEs) for these problems. We consider Galerkin and collocation methods - with subspaces consisting of $\textit{either}$ piecewise polynomials (in 2-d for collocation, in any dimension for Galerkin) $\textit{or}$ trigonometric polynomials (in 2-d) - as well as a fully discrete quadrature (a.k.a., Nystr\"om) method based on trigonometric polynomials (in 2-d). For each of these methods, we address the fundamental question: how quickly must $N$, the dimension of the approximation space, grow with $k$ to maintain accuracy as $k\to\infty$? For the methods involving piecewise-polynomials, we give sufficient conditions for $k$-uniform quasi-optimality. For the Galerkin method we prove that these are, in fact, necessary and sufficient. In particular, we prove that, when applied to the Neumann BIEs when the obstacle is a ball, the Galerkin method $\textit{suffers from the pollution effect}$; i.e., $N$ growing like $k^{d-1}$ is not sufficient for $k$-uniform quasi-optimality. For the Dirichlet BIEs, we prove that pollution occurs for the ball for certain choices of coupling parameter, and we give numerical experiments illustrating pollution for trapping domains with the standard coupling parameter. For all the methods involving trigonometric polynomials, we show that, up to potential factors of $k^\varepsilon$ for any $\varepsilon>0$, these methods do not suffer from the pollution effect (even for trapping obstacles). These are the first results about $k$-explicit convergence of collocation or Nystr\"om methods applied to the Dirichlet BIEs, and the first results about $k$-explicit convergence of any method used to solve the standard second-kind Neumann BIEs.
We present and analyse a new conforming space-time Galerkin discretisation of a semi-linear wave equation, based on a variational formulation derived from De Giorgi's elliptic regularisation viewpoint of the wave equation in second-order formulation. The method is shown to be well-posed through a minimisation approach, and also unconditionally stable for all choices of conforming discretisation spaces. Further, a priori error bounds are proven for sufficiently smooth solutions. Special attention is given to the conditioning of the method and its stable implementation. Numerical experiments are provided to validate the theoretical findings.
We consider discrete best approximation problems in the setting of tropical algebra, which is concerned with the theory and application of algebraic systems with idempotent operations. Given a set of input-output pairs of an unknown function defined on a tropical semifield, the problem is to determine an approximating rational function formed by two Puiseux polynomials as numerator and denominator. With specified numbers of monomials in both polynomials, the approximation aims at evaluating the exponent and coefficient for each monomial in the polynomials to fit the rational function to the data in the sense of a tropical distance function. To solve the problem, we transform it into approximation of a vector equation with unknown vectors on both sides, where one side corresponds to the numerator polynomial and the other side to the denominator. Each side involves a matrix with entries dependent on the unknown exponents, multiplied by the vector of unknown coefficients of monomials. We propose an algorithm that constructs a series of approximate solutions by alternately fixing one side of the equation to an already found result and leaving the other intact. Each equation obtained is approximated with respect to the vector of coefficients, which yields a vector of coefficients and an approximation error, both parameterized by the exponents. The exponents are found by minimizing the error with an optimization procedure based on an agglomerative clustering technique. To illustrate, we present results for approximation problems in terms of max-plus algebra (a real semifield with addition defined as maximum and multiplication as arithmetic addition), which correspond to ordinary problems of piecewise linear approximation of real functions. As our numerical experience shows, the proposed algorithm converges in a finite number of steps and provides a reasonable solution to the problems considered.
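The max-plus operations underlying these problems are simple to state in code. The following sketch (hypothetical names, not the paper's fitting algorithm) shows the semifield operations $\oplus=\max$, $\otimes=+$ acting on a matrix-vector product and on a tropical polynomial, which is exactly a piecewise-linear convex function.

```python
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

def maxplus_poly(coeffs, exponents, t):
    """Max-plus 'polynomial' p(t) = max_k (c_k + a_k * t): in conventional
    arithmetic, a piecewise-linear convex function of t."""
    return max(c + a * t for c, a in zip(coeffs, exponents))
```

This makes the connection to ordinary piecewise linear approximation concrete: fitting a max-plus rational function amounts to fitting a difference of two such convex piecewise-linear functions.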
In this work, we introduce a quadratically convergent and dynamically consistent integrator specifically designed for the replicator dynamics. The proposed scheme combines a two-stage rational approximation with a normalization step to ensure confinement to the probability simplex and unconditional preservation of non-negativity, invariant sets and equilibria. A rigorous convergence analysis is provided to establish the scheme's second-order accuracy, and an embedded auxiliary method is devised for adaptive time-stepping based on local error estimation. Furthermore, a discrete analogue of the quotient rule, which governs the evolution of component ratios, is shown to hold. Numerical experiments validate the theoretical results, illustrating the method's ability to reproduce complex dynamics and to outperform well-established solvers in particularly challenging scenarios.
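For illustration only: a first-order multiplicative update already exhibits the structure-preservation idea (confinement to the probability simplex, unconditional positivity, fixed boundary faces and equilibria). It is not the paper's second-order two-stage rational scheme.

```python
import numpy as np

def replicator_step(x, f, h):
    """One explicit step for the replicator dynamics
    dx_i/dt = x_i (f_i(x) - x.f(x)), via a multiplicative (exponential)
    update followed by normalization.  Non-negativity and the simplex
    constraint hold unconditionally for any step size h; expanding the
    exponential shows first-order consistency."""
    y = x * np.exp(h * f(x))
    return y / y.sum()
```

Note that component ratios satisfy $x_i'/x_j' = (x_i/x_j)\,e^{h(f_i-f_j)}$ under this update, a discrete analogue of the quotient rule governing the continuous dynamics.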
This work demonstrates a methodology for using deep learning to discover simple, practical criteria for classifying matrices based on abstract algebraic properties. By combining a high-performance neural network with explainable AI (XAI) techniques, we can distill a model's learned strategy into human-interpretable rules. We apply this approach to the challenging case of monotone matrices, defined by the condition that their inverses are entrywise nonnegative. Despite their simple definition, an easy characterization in terms of the matrix elements or derived parameters is not known. Here, we present, to the best of our knowledge, the first systematic machine-learning approach for deriving a practical criterion that distinguishes monotone from non-monotone matrices. After establishing a labelled dataset of randomly generated monotone and non-monotone matrices with entries drawn uniformly from $(-1,1)$, we employ deep neural network algorithms for classifying the matrices as monotone or non-monotone, using both their entries and a comprehensive set of matrix features. Using saliency methods such as integrated gradients, we identify, among all features, two matrix parameters that alone provide sufficient information for the classification with $95\%$ accuracy, namely the absolute values of the two lowest-order coefficients, $c_0$ and $c_1$, of the matrix's characteristic polynomial. A data-driven study of 18,000 random $7\times7$ matrices shows that the monotone class obeys $\lvert c_{0}/c_{1}\rvert\le0.18$ with probability $>99.98\%$; because $\lvert c_{0}/c_{1}\rvert = 1/\mathrm{tr}(A^{-1})$ for monotone $A$, this is equivalent to the simple bound $\mathrm{tr}(A^{-1})\ge5.7$.
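The two reported parameters are easy to compute directly. The following sketch (hypothetical helper names) evaluates $|c_0|$ and $|c_1|$ from the characteristic polynomial and tests monotonicity via entrywise nonnegativity of the inverse.

```python
import numpy as np

def char_poly_tail(A):
    """Return |c0| and |c1|, the absolute values of the two lowest-order
    coefficients of det(tI - A) = t^n + ... + c1 t + c0.
    For invertible A with tr(A^{-1}) != 0, |c0/c1| = 1/|tr(A^{-1})|."""
    c = np.poly(A)              # coefficients, highest degree first
    return abs(c[-1]), abs(c[-2])

def is_monotone(A):
    """A matrix is monotone iff its inverse exists and is
    entrywise nonnegative."""
    try:
        return bool(np.all(np.linalg.inv(A) >= 0.0))
    except np.linalg.LinAlgError:
        return False
```

The identity in the docstring follows from $c_0=(-1)^n\det A$ and $c_1=(-1)^{n-1}\det(A)\,\mathrm{tr}(A^{-1})$.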
A surrogate-based topology optimisation algorithm for linear elastic structures under parametric loads and boundary conditions is proposed. Instead of learning the parametric solution of the state (and adjoint) problems or the optimisation trajectory as a function of the iterations, the proposed approach devises a surrogate version of the entire optimisation pipeline. First, the method predicts a quasi-optimal topology for a given problem configuration as a surrogate model of high-fidelity topologies optimised with the homogenisation method. This is achieved by means of a feed-forward net learning the mapping between the input parameters characterising the system setup and a latent space determined by encoder/decoder blocks reducing the dimensionality of the parametric topology optimisation problem and reconstructing a high-dimensional representation of the topology. Then, the predicted topology is used as an educated initial guess for a computationally efficient algorithm penalising the intermediate values of the design variable, while enforcing the governing equations of the system. This step allows the method to correct potential errors introduced by the surrogate model, eliminate artifacts, and refine the design in order to produce topologies consistent with the underlying physics. Different architectures are proposed and the approximation and generalisation capabilities of the resulting models are numerically evaluated. The quasi-optimal topologies allow the method to outperform the high-fidelity optimiser by reducing the average number of optimisation iterations by $53\%$ while achieving discrepancies below $4\%$ in the optimal value of the objective functional, even in the challenging scenario of testing the model's ability to extrapolate beyond the training and validation domain.
We study the fully-developed, time-periodic motion of a shear-dependent non-Newtonian fluid with variable exponent rheology through an infinite pipe $\Omega:= \mathbb{R}\times \Sigma\subseteq \mathbb{R}^d$, $d\in \{2,3\}$, of arbitrary cross-section $\Sigma\subseteq \mathbb{R}^{d-1}$. The focus is on a generalized $p(\cdot)$-fluid model, where the power-law index is position-dependent (with respect to $\Sigma$), $\textit{i.e.}$, a function $p\colon \Sigma\to (1,+\infty)$. We prove the existence of time-periodic solutions with either assigned time-periodic flow-rate or pressure-drop, generalizing known results for the Navier-Stokes and for $p$-fluid equations. In addition, we identify explicit solutions, relevant as benchmark cases, especially for electro-rheological fluids or, more generally, $\textit{`smart fluids'}$. To support practical applications, we present a fully-constructive existence proof for variational solutions by means of a fully-discrete finite-differences/-elements discretization, consistent with our numerical experiments. Our approach, which unifies the treatment of all values of $p(\overline{x})\in (1,+\infty)$, $\overline{x}\in \Sigma$, without requiring an auxiliary Newtonian term, provides new insights even in the constant exponent case. The theoretical findings are reviewed by means of numerical experiments.
An inherent regularization strategy and block Schur complement preconditioning are studied for linear poroelasticity problems discretized using the lowest-order weak Galerkin FEM in space and the implicit Euler scheme in time. At each time step, the resulting saddle point system becomes nearly singular in the locking regime, where the solid is nearly incompressible. This near-singularity stems from the leading block, which corresponds to a linear elasticity system. To enable efficient iterative solution, this nearly singular system is first reformulated as a saddle point problem and then regularized by adding a term to the (2,2) block. This regularization preserves the solution while ensuring the non-singularity of the new system. As a result, block Schur complement preconditioning becomes effective. It is shown that the preconditioned MINRES and GMRES converge essentially independent of the mesh size and the locking parameter. Both two- and three-field formulations are considered for the iterative solution of the linear poroelasticity. The efficient solution of the two-field formulation builds upon the effective iterative solution of linear elasticity. For this case, MINRES and GMRES achieve parameter-free convergence when used with block Schur complement preconditioning, where the inverse of the leading block leverages efficient solvers for linear elasticity. The poroelasticity problem can also be reformulated as a three-field system by introducing a numerical pressure variable into the linear elasticity part. The inherent regularization strategy extends naturally to this formulation, and preconditioned MINRES and GMRES also show parameter-free convergence for the regularized system. Numerical experiments in both two and three dimensions confirm the effectiveness of the regularization strategy and the robustness of the block preconditioners.
We develop an entropy-stable nodal discontinuous Galerkin (DG) scheme for the Euler equations with gravity, which is also well-balanced with respect to general equilibrium solutions, including both hydrostatic and moving equilibria. The core of our approach lies in a novel treatment of the gravitational source term, combining entropy-conservative numerical fluxes with a linear entropy correction. In addition, the proposed formulation is carefully designed to ensure compatibility with a positivity-preserving limiter. We provide a rigorous theoretical analysis to establish the accuracy and structure-preserving properties of the proposed scheme. Extensive numerical experiments confirm the robustness and efficiency of the scheme.
In this paper, we present a fast multipole method (FMM) for solving the two-dimensional Laplace equation in a half-plane with Robin boundary conditions. The method is based on a novel expansion theory for the reaction component of the Green's function. By applying the Fourier transform, the reaction field component is obtained in a Sommerfeld-type integral form. We derive far-field approximations and corresponding shifting and translation operators from the Fourier integral representation. The FMM for the reaction component is then developed by using the new far-field approximations incorporated into the classic FMM framework in which the tree structure is constructed from the original and image charges. Combining this with the standard FMM for the free-space components, we develop a fast algorithm to compute the interaction of the half plane Laplace Green's function. We prove that the method exhibits exponential convergence, similar to the free-space FMM. Finally, numerical examples are presented to validate the theoretical results and demonstrate that the FMM achieves $O(N)$ computational complexity.
Numerous applications, such as Krylov subspace solvers, make extensive use of the block classical Gram-Schmidt (BCGS) algorithm and its reorthogonalized variants for orthogonalizing a set of vectors. For large-scale problems in distributed memory settings, the communication cost, particularly the global synchronization cost, is a major performance bottleneck. In recent years, many low-synchronization BCGS variants have been proposed in an effort to reduce the number of synchronization points. The work [E. Carson, Y. Ma, arXiv preprint 2411.07077] recently proposed stable one-synchronization and two-synchronization variants of BCGS, i.e., BCGSI+P-1S and BCGSI+P-2S. In this work, we evaluate the performance of BCGSI+P-1S and BCGSI+P-2S on a distributed memory system compared to other well-known low-synchronization BCGS variants. In comparison to the classical reorthogonalized BCGS algorithm (BCGSI+), numerical experiments demonstrate that BCGSI+P-1S and BCGSI+P-2S can achieve up to 4 times and 2 times speedups, respectively, and perform similarly to other (less stable) one-synchronization and two-synchronization variants. BCGSI+P-1S and BCGSI+P-2S are therefore recommended as the best choice in practice for computing an economic QR factorization on distributed memory systems due to their superior stability when compared to other variants with the same synchronization cost.
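For reference, a plain (non-reorthogonalized, non-low-sync) BCGS sketch makes the communication pattern explicit: the single inter-block projection per block is the step whose global reductions the one- and two-synchronization variants reduce. This is an illustrative implementation, not the BCGSI+P code.

```python
import numpy as np

def bcgs(X, s):
    """Block classical Gram-Schmidt QR of X, processing s columns at a
    time.  Previously computed blocks are projected out in one shot (in
    distributed memory, the Q^T W product is one global synchronization
    per block); intra-block orthogonalization uses a local QR."""
    m, n = X.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(0, n, s):
        blk = slice(k, min(k + s, n))
        W = X[:, blk].copy()
        if k > 0:
            R[:k, blk] = Q[:, :k].T @ W        # block projection coefficients
            W -= Q[:, :k] @ R[:k, blk]         # subtract the projection
        Q[:, blk], R[blk, blk] = np.linalg.qr(W)
    return Q, R
```

Without reorthogonalization, the loss of orthogonality of this variant grows with the condition number of X, which is precisely what the BCGSI+ family and its low-synchronization variants are designed to control.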
Spectral methods for solving partial differential equations (PDEs) and stochastic partial differential equations (SPDEs) often use Fourier or polynomial spectral expansions on either uniform or non-uniform grids. However, while very widely used, especially for slowly-varying solutions, non-uniform spatial grids can give larger spatial discretization errors if the solutions change rapidly in space. Here, we implement a Fourier method that employs fast trigonometric expansions on a uniform grid with non-periodic boundaries using fast discrete sine transforms (DST) and/or discrete cosine transforms (DCT) to solve parabolic PDEs. We implement this method in two ways: either using a Fourier spectral derivative or a Fourier interaction picture approach. These methods can treat vector fields with a combination of Dirichlet and/or Neumann boundary conditions in one or more space dimensions. We use them to solve a variety of PDEs with analytical solutions, including the Peregrine solitary wave solution. For the 1D heat equation problem, our method with an interaction picture is accurate up to machine precision. A soluble example of an SPDE with non-periodic boundaries is also treated. We compare the results obtained from these algorithms with those from publicly available solvers that use either polynomial spectral or finite element methods. For problems with solutions that vary rapidly in space, our method outperforms the other methods by recording lower spatial discretization errors, as well as being faster in many cases, due to the efficiency improvements given by fast transforms.
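A minimal sketch of the fast-transform idea for the 1D heat equation with homogeneous Dirichlet boundaries: a DST-I diagonalizes the Laplacian on a uniform interior grid, so each sine mode decays exactly, consistent with the machine-precision result quoted above. The function name is illustrative; the paper's solvers are considerably more general.

```python
import numpy as np
from scipy.fft import dst, idst

def heat_dirichlet(u0, T, nu=1.0):
    """Solve u_t = nu * u_xx on (0,1) with u(0)=u(1)=0, given u0 sampled
    at the n interior points x_j = j/(n+1).  In the sine basis (DST-I),
    mode k decays exactly as exp(-nu (k pi)^2 T)."""
    n = len(u0)
    k = np.arange(1, n + 1)
    uhat = dst(u0, type=1)                       # sine coefficients
    uhat *= np.exp(-nu * (k * np.pi) ** 2 * T)   # exact modal decay
    return idst(uhat, type=1)
```

For linear constant-coefficient problems the only error is the sine-series truncation of the initial data; for semi-linear parabolic problems this exact propagator becomes the interaction-picture step between nonlinear updates.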
We present a fully analytic approach for evaluating boundary integrals in two dimensions for Smoothed Particle Hydrodynamics (SPH). Conventional methods often rely on boundary particles or wall re-normalization approaches derived from applying the divergence theorem, whereas our method directly evaluates the area integrals for SPH kernels and gradients over triangular boundaries. This direct integration strategy inherently accommodates higher-order boundary conditions, such as piecewise cubic fields defined via Finite Element stencils, enabling analytic and flexible coupling with mesh-based solvers. At the core of our approach is a general solution for compact polynomials of arbitrary degree over triangles by decomposing the boundary elements into elementary integrals that can be solved with closed-form solutions. We provide a complete, closed-form solution for these generalized integrals, derived by relating the angular components to Chebyshev polynomials and solving the resulting radial integral via a numerically stable evaluation of the Gaussian hypergeometric function $_2F_1$. Our solution is robust and adaptable and works regardless of triangle geometries and kernel functions. We validate the accuracy against high-precision numerical quadrature rules, as well as in problems with known exact solutions. We provide an open-source implementation of our general solution using differentiable programming to facilitate the adoption of our approach to SPH and other contexts that require analytic integration over polygonal domains. Our analytic solution outperforms existing numerical quadrature rules for this problem by up to five orders of magnitude, for integrals and their gradients, while providing a flexible framework to couple arbitrary triangular meshes analytically to Lagrangian schemes, building a strong foundation for addressing several grand challenges in SPH and beyond.
This article aims to introduce the paradigm of distributional robustness from the field of convex optimization to tackle optimal design problems under uncertainty. We consider realistic situations where the physical model, and thereby the cost function of the design to be minimized, depend on uncertain parameters. The probability distribution of the latter is itself known imperfectly, through a nominal law reconstructed from a few observed samples. The distributionally robust optimal design problem is an intricate bilevel program which consists in minimizing the worst value of a statistical quantity of the cost function (typically, its expectation) when the law of the uncertain parameters belongs to a certain ``ambiguity set''. We address three classes of such problems: firstly, the ambiguity set is made of the probability laws whose Wasserstein distance to the nominal law is less than a given threshold; secondly, the ambiguity set is based on the first- and second-order moments of the actual and nominal probability laws. Finally, a statistical quantity of the cost other than its expectation, namely its conditional value at risk, is made robust with respect to the law of the parameters. Using techniques from convex duality, we derive tractable, single-level reformulations of these problems, framed over augmented sets of variables. Our methods are essentially agnostic of the optimal design framework; they are described in a unifying abstract framework, before being applied to multiple situations in density-based topology optimization and in geometric shape optimization. Several numerical examples are discussed in two and three space dimensions to appraise the features of the proposed techniques.
This paper investigates numerical methods for approximating the ground state of Bose--Einstein condensates (BECs) by introducing two relaxed formulations of the Gross--Pitaevskii energy functional. These formulations achieve first- and second-order accuracy with respect to the relaxation parameter \( \tau \), and are shown to converge to the original energy functional as \( \tau \to 0 \). A key feature of the relaxed functionals is their concavity, which ensures that local minima lie on the boundary of the concave hull. This property prevents energy increases during constraint normalization and enables the development of energy-dissipative algorithms. Numerical methods based on sequential linear programming are proposed, accompanied by rigorous analysis of their stability with respect to the relaxed energy. To enhance computational efficiency, an adaptive strategy is introduced, dynamically refining solutions obtained with larger relaxation parameters to achieve higher accuracy with smaller ones. Numerical experiments demonstrate the stability, convergence, and energy dissipation of the proposed methods, while showcasing the adaptive strategy's effectiveness in improving computational performance.