Wasserstein gradient flows have become a central tool for optimization problems over probability measures. A natural numerical approach is forward-Euler time discretization. We show, however, that even in the simple case where the energy functional is the Kullback-Leibler (KL) divergence against a smooth target density, forward-Euler can fail dramatically: the scheme does not converge to the gradient flow, despite the fact that the first variation $\nabla\frac{\delta F}{\delta\rho}$ remains formally well defined at every step. We identify the root cause as a loss of regularity induced by the discretization, and prove that a suitable regularization of the functional restores the necessary smoothness, making forward-Euler a viable solver that converges in discrete time to the global minimizer.
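As a toy illustration of the forward-Euler step described above (our own construction, not the paper's): restricted to centered 1D Gaussians $\rho = N(0,\sigma^2)$ with target $\pi = N(0,1)$, the Wasserstein forward-Euler map $x \mapsto x - \Delta t\,\partial_x(\log\rho - \log\pi)$ is linear in $x$, so it preserves Gaussianity and acts on the variance in closed form.

```python
def forward_euler_kl_gaussian(var0, dt, n_steps):
    """Wasserstein forward-Euler for F(rho) = KL(rho || N(0,1)), restricted to
    centered 1D Gaussians rho = N(0, var).  The step
        x -> x - dt * d/dx(log rho - log pi) = x * (1 - dt*(1 - 1/var))
    is linear in x, so it maps N(0, var) to N(0, var*(1 - dt*(1 - 1/var))**2)."""
    var = var0
    for _ in range(n_steps):
        var = var * (1.0 - dt * (1.0 - 1.0 / var)) ** 2
    return var
```

For small $\Delta t$ the variance contracts toward the target value $1$; for overly large steps, or a nearly degenerate iterate where $1/\sigma^2$ blows up, the update can overshoot.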
The Stochastic Weighted Particle Method (SWPM) of Rjasanow and Wagner is a generalization of the Direct Simulation Monte Carlo method for computing the probability density function of the velocities of a system of interacting particles, with applications that include rarefied gas dynamics and plasma processing systems. Key components of an SWPM simulation are a particle grouping technique and a particle reduction scheme. These are applied periodically to counteract the gradual increase in the number of stochastic particles and thereby reduce the computational cost of simulations. A general framework for designing particle reduction schemes is introduced that enforces the preservation of a prescribed set of moments of the distribution through the construction and explicit solution of a system of linear equations for particle weights in terms of particle velocities and the moments to be preserved. This framework is applied to preserve all moments of the distribution up to order three. Numerical simulations are performed to verify the scheme and quantify the degree to which even higher-order moments and tail functionals are preserved. These results reveal an unexpected trade-off between the preservation of these higher-order moments and tail functionals.
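A minimal 1D sketch of such a moment-matching linear system (assuming four prescribed post-reduction velocities so that moments of order zero through three are preserved; the paper's grouping step is omitted):

```python
import numpy as np

def reduce_particles(v, w, v_new):
    """Compute weights w_new on prescribed velocities v_new so that moments of
    order 0..len(v_new)-1 of the original weighted particle system (v, w) are
    preserved: sum(w_new * v_new**p) == sum(w * v**p)."""
    powers = np.arange(len(v_new))
    moments = np.array([(w * v**p).sum() for p in powers])
    A = np.vander(v_new, increasing=True).T   # A[p, j] = v_new[j]**p
    return np.linalg.solve(A, moments)
```

The system matrix is a (transposed) Vandermonde matrix, so it is invertible whenever the prescribed velocities are distinct; note that the resulting weights are not guaranteed to be nonnegative.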
We introduce a three-dimensional (3D) fully tensor train (TT)-assembled isogeometric analysis (IGA) framework, TT-IGA, for solving partial differential equations (PDEs) on complex geometries. Our method reformulates IGA discrete operators into TT format, enabling efficient compression and computation while retaining geometric flexibility and accuracy. Unlike previous low-rank approaches that typically rely on structured domains, our framework accommodates general 3D geometries through low-rank TT representations of both the geometry mapping and the PDE discretization. We demonstrate the effectiveness of the proposed TT-IGA framework on the 3D Poisson equation, achieving substantial reductions in memory usage and computational cost without compromising solution quality.
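The basic compression step underlying any TT-assembled method can be sketched for a 3D array via two sequential truncated SVDs (the standard TT-SVD; the paper's assembly of IGA operators in TT format is more involved):

```python
import numpy as np

def tt_svd_3d(T, tol=1e-12):
    """Decompose a 3D array T into TT cores G1 (n1 x r1), G2 (r1 x n2 x r2),
    G3 (r2 x n3) via two sequential truncated SVDs (TT-SVD)."""
    n1, n2, n3 = T.shape
    U, s, Vt = np.linalg.svd(T.reshape(n1, n2 * n3), full_matrices=False)
    r1 = int((s > tol * s[0]).sum())
    G1 = U[:, :r1]
    M = (s[:r1, None] * Vt[:r1]).reshape(r1 * n2, n3)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r2 = int((s > tol * s[0]).sum())
    G2 = U[:, :r2].reshape(r1, n2, r2)
    G3 = s[:r2, None] * Vt[:r2]
    return G1, G2, G3
```

The full array is recovered by contracting the cores, e.g. `np.einsum('ia,ajb,bk->ijk', G1, G2, G3)`; when the TT ranks $r_1, r_2$ are small, storage drops from $n_1 n_2 n_3$ to $O(n r^2)$.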
We develop numerical schemes and sensitivity methods for stochastic models of proton transport that couple energy loss, range straggling and angular diffusion. For the energy equation we introduce a logarithmic Milstein scheme that guarantees positivity and achieves strong order one convergence. For the angular dynamics we construct a Lie-group integrator. The combined method maintains the natural geometric invariants of the system. We formulate dose deposition as a regularised path-dependent functional, obtaining a pathwise sensitivity estimator that is consistent and implementable. Numerical experiments confirm that the proposed schemes achieve the expected convergence rates and provide stable estimates of dose sensitivities.
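A hedged sketch of the positivity-preserving idea, for an assumed illustrative energy-loss SDE $dE = -\alpha E\,dt + \sigma\sqrt{E}\,dW$ (not necessarily the paper's model): apply Milstein to $X = \log E$ and exponentiate, so $E = e^X > 0$ by construction.

```python
import numpy as np

def log_milstein_energy(E0, alpha, sigma, dt, n_steps, rng):
    """Milstein scheme applied to X = log E for the (assumed) SDE
        dE = -alpha*E dt + sigma*sqrt(E) dW.
    Ito's formula gives dX = (-alpha - sigma**2/(2*E)) dt + sigma*E**(-1/2) dW
    with E = exp(X); exponentiating the update keeps E positive by construction."""
    X = np.log(E0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        g = sigma * np.exp(-X / 2)          # diffusion coefficient in X
        dg = -0.5 * g                       # its derivative dg/dX
        drift = -alpha - sigma**2 / (2 * np.exp(X))
        X = X + drift * dt + g * dW + 0.5 * g * dg * (dW**2 - dt)
    return np.exp(X)
```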
Rooted trees are essential for describing numerical schemes via the so-called B-series. They have also been used extensively in rough analysis for expanding solutions of singular Stochastic Partial Differential Equations (SPDEs). When one considers scalar-valued equations, the most efficient combinatorial set is multi-indices. In this paper, we investigate the existence of intermediate combinatorial sets lying between multi-indices and rooted trees. We provide a negative result: in dimension $d\neq 1$, no combinatorial set other than the rooted trees themselves encodes elementary differentials while remaining compatible with both the rooted trees and the multi-indices. This does not close the debate on the existence of such combinatorial sets, but it shows that they cannot be obtained via a naive and natural approach.
In recent years, several numerical methods for solving the unique continuation problem for the wave equation in a homogeneous medium with given data on the lateral boundary of the space-time cylinder have been proposed. This problem enjoys Lipschitz stability if the geometric control condition is fulfilled, which allows devising optimally convergent numerical methods. In this article, we investigate whether these results carry over to the case in which the medium exhibits a jump discontinuity. Our numerical experiments suggest a positive answer. However, we also observe that the presence of discontinuities in the medium renders the computations far more demanding than in the homogeneous case.
This work considers the numerical solution of a subdiffusion equation involving a constant time delay $\tau$ and a Riemann-Liouville fractional derivative. First, a fully discrete finite element scheme is developed for the considered problem on a symmetric graded time mesh, where the Caputo fractional derivative is approximated via the L1 formula, while the Riemann-Liouville integral is discretized using the fractional right rectangular rule. Under the assumption that the exact solution has low regularity at $t=0$ and $t=\tau$, the local truncation errors of both the L1 formula and the fractional right rectangular rule are analyzed. It is worth noting that, by setting the mesh parameter $r=1$, the symmetric graded time mesh degenerates to a uniform mesh. Consequently, we discuss the stability and convergence of the proposed numerical scheme in two scenarios. For the uniform time mesh, by introducing a discrete sequence $\{P_k\}$, the unconditional stability and a local time error estimate for the developed scheme are established. Conversely, on the symmetric graded time mesh, through the introduction of a discrete fractional Gronwall inequality, stability and a globally optimal time error estimate are obtained. Finally, some numerical tests are presented to validate the theoretical results.
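The L1 formula on an arbitrary (e.g. graded) mesh can be sketched as follows: it replaces $u$ by its piecewise-linear interpolant inside the Caputo integral, and is exact for linear $u$.

```python
import math
import numpy as np

def l1_caputo(t, u, beta):
    """L1 approximation of the Caputo derivative of order beta in (0,1) at
    t[-1], on an arbitrary (e.g. graded) mesh t[0] < ... < t[-1]: u is
    replaced by its piecewise-linear interpolant inside the integral
    (1/Gamma(1-beta)) * int_0^{t_n} u'(s) * (t_n - s)**(-beta) ds."""
    tn = t[-1]
    acc = 0.0
    for k in range(1, len(t)):
        slope = (u[k] - u[k - 1]) / (t[k] - t[k - 1])
        acc += slope * ((tn - t[k - 1]) ** (1 - beta) - (tn - t[k]) ** (1 - beta))
    return acc / math.gamma(2 - beta)
```

For $u(t) = t$ the sum telescopes and the formula reproduces the exact value $t_n^{1-\beta}/\Gamma(2-\beta)$ on any mesh; for solutions with weak singularities, grading the mesh toward the singular points recovers accuracy.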
This paper presents a neural network--enhanced surrogate modeling approach for diffusion problems with spatially varying random field coefficients. The method builds on numerical homogenization, which compresses fine-scale coefficients into coarse-scale surrogates without requiring periodicity. To overcome computational bottlenecks, we train a neural network to map fine-scale coefficient samples to effective coarse-scale information, enabling the construction of accurate surrogates at the target resolution. This framework allows for the fast and efficient compression of new coefficient realizations, thereby ensuring reliable coarse models and supporting scalable computations for large ensembles of random coefficients. We demonstrate the efficacy of our approach through systematic numerical experiments for two classes of coefficients, emphasizing the influence of coefficient contrast: (i) lognormal diffusion coefficients, a standard model for uncertain subsurface structures in geophysics, and (ii) hierarchical Gaussian random fields with random correlation lengths.
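As a one-dimensional stand-in for the fine-to-coarse compression (illustrative only; the paper's numerical homogenization is multi-dimensional and does not reduce to an averaging formula), the effective coefficient per coarse cell in 1D is the harmonic average, and the neural network would be trained to emulate such a fine-to-coarse map for new coefficient samples:

```python
import numpy as np

def coarse_surrogate_1d(a_fine, n_coarse):
    """Compress a fine-scale 1D diffusion coefficient into one effective value
    per coarse cell via the harmonic average, which is the exact effective
    coefficient for 1D diffusion.  This is the kind of fine-to-coarse map a
    neural network surrogate would be trained to emulate."""
    cells = np.array_split(a_fine, n_coarse)
    return np.array([len(c) / np.sum(1.0 / c) for c in cells])
```

Note that high-contrast coefficients pull the harmonic average strongly toward the small values, which is one reason coefficient contrast matters in the experiments described above.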
Physics-Informed Neural Networks (PINNs) leverage machine learning with differential equations to solve direct and inverse problems, ensuring predictions follow physical laws. Physiologically based pharmacokinetic (PBPK) modeling advances beyond classical compartmental approaches by using a mechanistic, physiology-focused framework. A PBPK model is based on a system of ODEs, with each equation representing the mass balance of a drug in a compartment, such as an organ or tissue. These ODEs include parameters that reflect physiological, biochemical, and drug-specific characteristics to simulate how the drug moves through the body. In this paper, we introduce PBPK-iPINN, a method to estimate drug-specific or patient-specific parameters and drug concentration profiles in PBPK brain compartment models using inverse PINNs. We demonstrate that, for the inverse problem to converge to the correct solution, the loss function components (data loss, initial conditions loss, and residual loss) must be appropriately weighted, and hyperparameters (including the number of layers, number of neurons, activation functions, learning rate, optimizer, and collocation points) must be carefully tuned. The performance of the PBPK-iPINN approach is then compared with established traditional numerical and statistical methods.
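An illustrative two-compartment mass-balance system of the kind PBPK models are built from (hypothetical rate constants and a generic central/peripheral structure, not the paper's brain compartment model):

```python
import numpy as np

def simulate_two_compartment(C0, k10, k12, k21, dt, n_steps):
    """Forward-Euler integration of an illustrative two-compartment
    mass-balance system:
        dC1/dt = -(k10 + k12)*C1 + k21*C2   (central, elimination rate k10)
        dC2/dt =  k12*C1 - k21*C2           (peripheral)
    Returns the trajectory of (C1, C2), one row per time step."""
    C = np.array(C0, dtype=float)
    traj = [C.copy()]
    for _ in range(n_steps):
        dC1 = -(k10 + k12) * C[0] + k21 * C[1]
        dC2 = k12 * C[0] - k21 * C[1]
        C = C + dt * np.array([dC1, dC2])
        traj.append(C.copy())
    return np.array(traj)
```

With elimination switched off ($k_{10}=0$) the total drug amount is conserved, which is the mass-balance property an inverse PINN's residual loss would enforce on each compartment equation.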
In this paper, we introduce an immersed $C^0$ interior penalty method for solving two-dimensional biharmonic interface problems on unfitted meshes. To accommodate the biharmonic interface conditions, high-order immersed finite element (IFE) spaces are constructed in the least-squares sense. We establish key properties of these spaces, including unisolvency and partition of unity, and verify their optimal approximation capability. These spaces are further incorporated into a modified $C^0$ interior penalty scheme with additional penalty terms on interface segments. The well-posedness of the discrete solution is proved. Numerical experiments with various interface geometries confirm optimal convergence of the proposed method in the $L^2$, $H^1$ and $H^2$ norms.
We propose, analyze, and test an efficient splitting iteration for solving the incompressible, steady Navier-Stokes equations in the setting where partial solution data is known. The (possibly noisy) solution data is incorporated into a Picard-type solver via continuous data assimilation (CDA). Efficiency is gained over the usual Picard iteration through an algebraic splitting of Yosida-type that produces easier linear solves, and accuracy/consistency is shown to be maintained through the use of an incremental pressure and grad-div stabilization. We prove that CDA scales the Lipschitz constant of the associated fixed point operator by $H^{1/2}$, where $H$ is the characteristic spacing of the known solution data. This implies that CDA accelerates an already converging solver (and the more data, the more acceleration) and enables convergence of solvers in parameter regimes where the solver would fail (and the more data, the larger the parameter regime). Numerical tests illustrate the theory on several benchmark test problems and show that the proposed efficient solver gives nearly identical results in terms of number of iterations to converge; in other words, the proposed solver gives an efficiency gain with no loss in convergence rate.
Slow, viscous flow in branched structures arises in many biological and engineering settings. Direct numerical simulation of flow in such complicated multi-scale geometry, however, is a computationally intensive task. We propose a scattering theory framework that dramatically reduces this cost by decomposing networks into components connected by short straight channels. Exploiting the phenomenon of rapid return to Poiseuille flow (Saint-Venant's principle in the context of elasticity), we compute a high-order accurate scattering matrix for each component via boundary integral equations. These precomputed components can then be assembled into arbitrary branched structures, and the precomputed local solutions on each component can be assembled into an accurate global solution. The method is modular, has negligible cost, and appears to be the first full-fidelity solver that makes use of the return to Poiseuille flow phenomenon. In our two-dimensional examples, it matches the accuracy of full-domain solvers while requiring only a fraction of the computational effort.
The Neural Tangent Kernel (NTK) framework has provided deep insights into the training dynamics of neural networks under gradient flow. However, it relies on the assumption that the network is differentiable with respect to its parameters, an assumption that breaks down when considering non-smooth target functions or parameterized models exhibiting non-differentiable behavior. In this work, we propose a Nonlocal Neural Tangent Kernel (NNTK) that replaces the local gradient with a nonlocal interaction-based approximation in parameter space. Nonlocal gradients are known to exist for a wider class of functions than the standard gradient. This allows NTK theory to be extended to nonsmooth functions, stochastic estimators, and broader families of models. We explore both fixed-kernel and attention-based formulations of this nonlocal operator. We illustrate the new formulation with numerical studies.
We develop the framework for a non-intrusive, quadrature-based method for approximate balanced truncation (QuadBT) of linear systems with quadratic outputs, thus extending the applicability of QuadBT, which was originally designed for data-driven balanced truncation of standard linear systems with linear outputs only. The new approach makes use of the time-domain and frequency-domain quadrature-based representation of the system's infinite Gramians, only implicitly. We show that by sampling solely the extended impulse responses of the original system and their derivatives (or the corresponding transfer functions), we construct a reduced-order model that mimics the approximation quality of the intrusive (projection-based) balanced truncation. We validate the proposed framework on a numerical example.
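The quadrature idea can be sketched for a stable diagonal $A$ (chosen here so the matrix exponential is elementwise and the exact Gramian is known in closed form; the data-driven method itself needs only samples of impulse responses, never $A$):

```python
import numpy as np

def quad_gramian(a, B, T, n):
    """Composite-trapezoid quadrature approximation of the (finite-horizon)
    controllability Gramian W = int_0^T e^{At} B B^T e^{A^T t} dt for a
    stable diagonal A = diag(a), built only from samples of the impulse
    response h(t) = e^{At} B."""
    t = np.linspace(0.0, T, n + 1)
    wts = np.full(n + 1, T / n)
    wts[0] *= 0.5
    wts[-1] *= 0.5
    W = np.zeros((len(a), len(a)))
    for ti, wi in zip(t, wts):
        h = np.exp(a * ti)[:, None] * B     # e^{At} B for diagonal A
        W += wi * (h @ h.T)
    return W
```

For diagonal $A$ the exact entries are $W_{ij} = (e^{(a_i+a_j)T}-1)/(a_i+a_j)\,(BB^T)_{ij}$, so the quadrature error can be checked directly; the balanced truncation step then factors such Gramian quadratures implicitly.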
We obtain the Green's function $G$ for any flat rhombic torus $T$, with numerical values accurate to the fourth decimal place (noting that $G$ is unique for $|T|=1$ and $\int_TG\,dA=0$). This precision is guaranteed by the strategies we adopt, which include theorems such as the Legendre relation, properties of the Weierstra\ss\ P-function, and the algorithmic control of numerical errors. Our code uses complex integration routines developed by H. Karcher, who also introduced the symmetric Weierstra\ss\ P-function, and these resources considerably simplify the computation of elliptic functions.
In this paper, we examine the problem of sampling from log-concave distributions with (possibly) superlinear gradient growth under kinetic (underdamped) Langevin algorithms. Using a carefully tailored taming scheme, we propose two novel discretizations of the kinetic Langevin SDE, and we show that they are both contractive and satisfy a log-Sobolev inequality. Building on this, we establish a series of non-asymptotic bounds in $2$-Wasserstein distance between the law reached by each algorithm and the underlying target measure.
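One common taming choice (shown for illustration only; the paper's two discretizations may differ) divides the gradient by $1 + \lambda\|\nabla U(\theta)\|$ with $\lambda$ the step size, keeping each drift increment bounded even when $\nabla U$ grows superlinearly:

```python
import numpy as np

def tamed_kinetic_langevin(grad_U, theta0, n_steps, step, gamma, rng):
    """Euler discretization of kinetic (underdamped) Langevin dynamics with a
    simple taming of the potentially superlinearly growing gradient:
        theta' = v
        v'     = -gamma*v - grad_U(theta)/(1 + step*||grad_U(theta)||)
                 + sqrt(2*gamma) * dW."""
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(n_steps):
        g = grad_U(theta)
        tamed = g / (1.0 + step * np.linalg.norm(g))
        theta = theta + step * v
        v = (v - step * (gamma * v + tamed)
             + np.sqrt(2.0 * gamma * step) * rng.normal(size=v.shape))
    return theta
```

The taming factor keeps $\|\text{step} \cdot \text{tamed}\| \le 1$ per iteration, which is what prevents the explosion that an untamed Euler scheme can exhibit for superlinear gradients such as $\nabla U(\theta) = \theta^3$.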
We revisit the problem of spectral clustering in multimodal settings, where each data modality is encoded as a graph Laplacian. While classical approaches--including joint diagonalization, spectral co-regularization, and multiview clustering--attempt to align embeddings across modalities, they often rely on costly iterative refinement and may fail to directly target the spectral subspace relevant for clustering. In this work, we introduce two key innovations. First, we bring the power of randomization to this setting by sampling random convex combinations of Laplacians as a simple and scalable alternative to explicit eigenspace alignment. Second, we propose a principled selection rule based on Bottom-$k$ Aggregated Spectral Energy (BASE)--a $k$-dimensional extension of the directional smoothness objective from recent minimax formulations--which we uniquely apply as a selection mechanism rather than an optimization target. The result is Randomized Joint Diagonalization with BASE Selection (RJD-BASE), a method that is easily implementable, computationally efficient, aligned with the clustering objective, and grounded in decades of progress in standard eigensolvers. Through experiments on synthetic and real-world datasets, we show that RJD-BASE reliably selects high-quality embeddings, outperforming classical multimodal clustering methods at low computational cost.
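A compact sketch of the RJD-BASE loop as described above (function and variable names are ours): sample random convex combinations of the Laplacians, embed with the bottom-$k$ eigenvectors of each sample, score by the aggregated spectral energy across modalities, and keep the best candidate.

```python
import numpy as np

def rjd_base(laplacians, k, n_trials, rng):
    """Sample random convex combinations of graph Laplacians, take the
    bottom-k eigenvectors of each sampled combination, and keep the candidate
    minimizing the Bottom-k Aggregated Spectral Energy:
    sum over modalities m of trace(V^T L_m V)."""
    best_V, best_energy = None, np.inf
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(len(laplacians)))
        L = sum(wi * Li for wi, Li in zip(w, laplacians))
        eigvecs = np.linalg.eigh(L)[1]          # eigenvalues in ascending order
        V = eigvecs[:, :k]                      # bottom-k eigenvectors
        energy = sum(np.trace(V.T @ Lm @ V) for Lm in laplacians)
        if energy < best_energy:
            best_energy, best_V = energy, V
    return best_V, best_energy
```

The selected embedding `best_V` would then be fed to a standard clustering routine such as k-means, exactly as in single-modality spectral clustering.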
In this paper we formulate and analyse adaptive (space-time) least-squares finite element methods for the solution of convection-diffusion equations. The convective derivative $\mathbf{v} \cdot \nabla u$ is considered as part of the total time derivative $\frac{d}{dt}u = \partial_t u + \mathbf{v} \cdot \nabla u$, and therefore we can use a rather standard stability and error analysis for related space-time finite element methods. For stationary problems we restrict the ansatz space $H^1_0(\Omega)$ such that the convective derivative is considered as an element of the dual $H^{-1}(\Omega)$ of the test space $H^1_0(\Omega)$, which also allows unbounded velocities $\mathbf{v}$. While the discrete finite element schemes are always uniquely solvable, the numerical solutions may suffer from the poor approximation properties of the finite element space for convection-dominated problems, i.e., small diffusion coefficients. Instead of adding suitable stabilization terms, we aim to resolve the solutions by using adaptive (space-time) finite element methods. For this we introduce a least-squares approach in which the discrete adjoint defines local a posteriori error indicators to drive an adaptive scheme. Numerical examples illustrate the theoretical considerations.