This note introduces a unified theory for causal inference that integrates Riesz regression, covariate balancing, density-ratio estimation (DRE), targeted maximum likelihood estimation (TMLE), and the matching estimator in average treatment effect (ATE) estimation. In ATE estimation, the balancing weights and the outcome regression functions play important roles, where the balancing weights are referred to as the Riesz representer, the bias-correction term, or the clever covariates, depending on the context. Riesz regression, covariate balancing, DRE, and the matching estimator are methods for estimating the balancing weights: Riesz regression is essentially equivalent to DRE in the ATE context, the matching estimator is a special case of DRE, and DRE is in a dual relationship with covariate balancing. TMLE is a method for constructing regression function estimators such that the leading bias term becomes zero. In particular, nearest neighbor matching is equivalent to least squares density-ratio estimation and to Riesz regression.
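As an illustration of how the balancing weights enter ATE estimation, the sketch below (a synthetic example of our own; the propensity score is assumed known rather than estimated) evaluates the Riesz-representer form of the ATE, $E[\alpha(D, X) Y]$ with $\alpha(d, x) = d/e(x) - (1-d)/(1-e(x))$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)                       # single covariate
e = 1.0 / (1.0 + np.exp(-x))                 # true propensity score P(D=1|X)
d = rng.binomial(1, e)                       # treatment indicator
y = 2.0 * d + x + rng.normal(size=n)         # outcome; true ATE = 2

# Balancing weights (Riesz representer) for the ATE functional:
# alpha(d, x) = d/e(x) - (1-d)/(1-e(x)); then ATE = E[alpha * Y].
alpha = d / e - (1 - d) / (1 - e)
ate_hat = np.mean(alpha * y)
```

With the true propensity plugged in, `ate_hat` recovers the ATE of 2 up to sampling noise; the estimation methods listed in the abstract differ in how they learn `alpha` from data.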
The goal of policy learning is to train a policy function that recommends a treatment given covariates to maximize population welfare. There are two major approaches in policy learning: the empirical welfare maximization (EWM) approach and the plug-in approach. The EWM approach is analogous to a classification problem, where one first builds an estimator of the population welfare, which is a functional of policy functions, and then trains a policy by maximizing the estimated welfare. In contrast, the plug-in approach is based on regression, where one first estimates the conditional average treatment effect (CATE) and then recommends the treatment with the highest estimated outcome. This study bridges the gap between the two approaches by showing that both are based on essentially the same optimization problem. In particular, we prove an exact equivalence between EWM and least squares over a reparameterization of the policy class. As a consequence, the two approaches are interchangeable in several respects and share the same theoretical guarantees under common conditions. Leveraging this equivalence, we propose a novel regularization method for policy learning. Our findings yield a convex and computationally efficient training procedure that avoids the NP-hard combinatorial step typically required in EWM.
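A minimal sketch of the plug-in approach (the synthetic data and linear outcome model are our own assumptions): fit one outcome regression per arm, form the estimated CATE, and recommend treatment wherever it is positive:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.uniform(-1.0, 1.0, size=(n, 1))
d = rng.binomial(1, 0.5, size=n)
# True CATE is tau(x) = x: treatment helps exactly when x > 0.
y = d * x[:, 0] + rng.normal(scale=0.1, size=n)

# Plug-in approach: one linear outcome regression per arm, then argmax.
X = np.c_[np.ones(n), x[:, 0]]
b1, *_ = np.linalg.lstsq(X[d == 1], y[d == 1], rcond=None)
b0, *_ = np.linalg.lstsq(X[d == 0], y[d == 0], rcond=None)
cate_hat = X @ (b1 - b0)
policy = (cate_hat > 0).astype(int)       # treat where estimated CATE is positive
```

The EWM approach would instead maximize an estimated welfare functional over a policy class directly; the equivalence claimed in the abstract says the two routes solve essentially the same optimization problem.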
While the ordinary least squares estimator (OLSE) is still the most widely used estimator in linear regression models, other estimators can be more efficient when the error distribution is not Gaussian. In this paper, our goal is to evaluate this efficiency for the maximum likelihood estimator (MLE) when the noise distribution belongs to a scale family. Under some regularity conditions, we show that $(\beta_n, s_n)$, the MLE of the unknown regression vector $\beta_0$ and the scale $s_0$, exists, and we give an expression for the asymptotic efficiency of $\beta_n$ relative to the OLSE. For three given scale families of densities, we quantify the true statistical gain of the MLE as a function of their deviation from the Gaussian family. To illustrate the theory, we present simulation results for different settings and also compare the MLE to the OLSE on a real fish-market dataset.
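To see the kind of efficiency gain at stake, the sketch below (our own toy setting: a pure location model with Laplace errors, where the MLE of the location is the sample median and the OLSE is the sample mean) compares the two estimators' variances; the asymptotic efficiency ratio here is 2:

```python
import numpy as np

rng = np.random.default_rng(2)
reps, n, b = 2000, 400, 1.0
samples = rng.laplace(0.0, b, size=(reps, n))   # Laplace(0, b) noise

mean_est = samples.mean(axis=1)     # OLSE of the location
med_est = np.median(samples, axis=1)  # MLE of the location under Laplace errors

# Asymptotically Var(mean) = 2 b^2 / n and Var(median) = b^2 / n, so eff -> 2.
eff = mean_est.var() / med_est.var()
```

Replacing the Laplace draws with Gaussian ones flips the comparison: the mean becomes the MLE and the median loses efficiency, which is the deviation-from-Gaussianity effect the paper quantifies.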
Higher-Order Hypergraph Learning (HOHL) was recently introduced as a principled alternative to classical hypergraph regularization, enforcing higher-order smoothness via powers of multiscale Laplacians induced by the hypergraph structure. Prior work established the well- and ill-posedness of HOHL through an asymptotic consistency analysis in geometric settings. We extend this theoretical foundation by proving the consistency of a truncated version of HOHL and deriving explicit convergence rates when HOHL is used as a regularizer in fully supervised learning. We further demonstrate its strong empirical performance in active learning and in datasets lacking an underlying geometric structure, highlighting HOHL's versatility and robustness across diverse learning settings.
Forecast combination and model averaging have become popular tools in forecasting and prediction, both of which combine a set of candidate estimates with certain weights and are often shown to outperform single estimates. A data-driven method to determine combination/averaging weights typically optimizes a criterion under certain weight constraints. While a large number of studies have been devoted to developing and comparing various weight choice criteria, the role of weight constraints in the properties of combination forecasts is relatively less understood, and the use of various constraints in practice is also rather arbitrary. In this study, we summarize prevalent weight constraints used in the literature, and theoretically and numerically compare how they influence the properties of the combined forecast. Our findings not only provide a comprehensive understanding of the role of various weight constraints but also offer practical guidance to empirical researchers on how to choose relevant constraints based on prior information and targets.
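A hedged sketch of how a weight constraint changes the combination (the candidate forecasts and their noise levels are our own assumptions): unconstrained least-squares weights versus convex weights, i.e. nonnegative and summing to one, found here by a coarse grid search:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 500
y = rng.normal(size=T).cumsum() * 0.1 + rng.normal(size=T)
# Hypothetical candidate forecasts with different error levels; f2 is best.
f1 = y + rng.normal(scale=1.0, size=T)
f2 = y + rng.normal(scale=0.5, size=T)
f3 = y + rng.normal(scale=2.0, size=T)
F = np.c_[f1, f2, f3]

# Unconstrained least-squares combination weights.
w_ols, *_ = np.linalg.lstsq(F, y, rcond=None)

# Convex-combination weights (nonnegative, summing to one) by grid search.
grid = np.linspace(0.0, 1.0, 51)
best, w_cvx = np.inf, None
for a in grid:
    for b in grid:
        if a + b <= 1.0:
            w = np.array([a, b, 1.0 - a - b])
            mse = np.mean((F @ w - y) ** 2)
            if mse < best:
                best, w_cvx = mse, w
```

The convex constraint sacrifices some in-sample fit relative to `w_ols` but rules out extreme weights, which is one of the bias-variance trade-offs between constraint sets that the paper compares.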
Given a noisy linear measurement $y = Ax + \xi$ of a distribution $p(x)$, and a good approximation to the prior $p(x)$, when can we sample from the posterior $p(x \mid y)$? Posterior sampling provides an accurate and fair framework for tasks such as inpainting, deblurring, and MRI reconstruction, and several heuristics attempt to approximate it. Unfortunately, approximate posterior sampling is computationally intractable in general. To sidestep this hardness, we focus on (local or global) log-concave distributions $p(x)$. In this regime, Langevin dynamics yields posterior samples when the exact scores of $p(x)$ are available, but it is brittle to score estimation error, requiring an MGF bound (sub-exponential error). By contrast, in the unconditional setting, diffusion models succeed with only an $L^2$ bound on the score error. We prove that combining diffusion models with an annealed variant of Langevin dynamics achieves conditional sampling in polynomial time using merely an $L^4$ bound on the score error.
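As a minimal sketch of the Langevin step (a one-dimensional conjugate Gaussian model of our own, where the exact posterior score is available in closed form, so this illustrates the dynamics rather than score estimation or annealing):

```python
import numpy as np

rng = np.random.default_rng(3)
# Prior p(x) = N(0, 1); measurement y = x + xi with xi ~ N(0, sigma2).
sigma2, y = 0.5, 1.2
post_mean = y / (1.0 + sigma2)             # closed-form posterior mean for reference

def score(x):
    """Exact posterior score grad log p(x | y) for this conjugate model."""
    return -x + (y - x) / sigma2

h = 0.01                                   # Langevin step size
burn, keep = 2000, 100000
noise = rng.normal(size=burn + keep)
x, chain = 0.0, np.empty(keep)
for t in range(burn + keep):
    # Euler-Maruyama discretization: x <- x + h * score(x) + sqrt(2h) * N(0,1)
    x = x + h * score(x) + np.sqrt(2.0 * h) * noise[t]
    if t >= burn:
        chain[t - burn] = x
```

The chain's empirical mean matches the closed-form posterior mean; the paper's contribution concerns what happens when `score` is only an estimate with $L^4$-bounded error.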
In this work we extend a sequential change detection algorithm developed in 2022, which uses Page's CUSUM statistic, the empirical distribution as an estimate of the pre-change distribution, and a universal code as a tool for estimating the post-change distribution, from the i.i.d. case to the Markov setup.
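A minimal sketch of Page's CUSUM statistic in the simplest fully specified i.i.d. case (known Gaussian pre- and post-change densities; the change point, shift size, and threshold are our own assumptions, not the paper's estimated-distribution setting):

```python
import numpy as np

rng = np.random.default_rng(4)
# Mean shift at time 300: N(0,1) samples, then N(1,1).
x = np.r_[rng.normal(0.0, 1.0, 300), rng.normal(1.0, 1.0, 300)]

# Page's CUSUM: S_t = max(S_{t-1} + log f1(x_t)/f0(x_t), 0); alarm when S_t > c.
llr = x - 0.5                              # log-likelihood ratio of N(1,1) vs N(0,1)
s, alarm, c = 0.0, None, 8.0
for t, l in enumerate(llr):
    s = max(s + l, 0.0)
    if s > c:
        alarm = t
        break
```

Before the change the log-likelihood ratio has negative drift, so the statistic hugs zero; after the change it drifts upward and crosses the threshold with a short delay. The paper replaces the known densities with an empirical estimate and a universal code.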
This paper proposes a novel paradigm for machine learning that moves beyond traditional parameter optimization. Unlike conventional approaches that search for optimal parameters within a fixed geometric space, our core idea is to treat the model itself as a malleable geometric entity. Specifically, we optimize the metric tensor field on a manifold with a predefined topology, thereby dynamically shaping the geometric structure of the model space. To achieve this, we construct a variational framework whose loss function carefully balances data fidelity against the intrinsic geometric complexity of the manifold. The former ensures the model effectively explains observed data, while the latter acts as a regularizer, penalizing overly curved or irregular geometries to encourage simpler models and prevent overfitting. To address the computational challenges of this infinite-dimensional optimization problem, we introduce a practical method based on discrete differential geometry: the continuous manifold is discretized into a triangular mesh, and the metric tensor is parameterized by edge lengths, enabling efficient optimization using automatic differentiation tools. Theoretical analysis reveals a profound analogy between our framework and the Einstein-Hilbert action in general relativity, providing an elegant physical interpretation for the concept of "data-driven geometry". We further argue that even with fixed topology, metric optimization offers significantly greater expressive power than models with fixed geometry. This work lays a solid foundation for constructing fully dynamic "meta-learners" capable of autonomously evolving their geometry and topology, and it points to broad application prospects in areas such as scientific model discovery and robust representation learning.
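As a small illustration of the discrete-differential-geometry step (our own toy computation, not the paper's full variational framework): on a triangulated surface the metric is encoded by edge lengths alone, from which angles, and hence the angle defect playing the role of discrete Gaussian curvature, are recovered via the law of cosines:

```python
import numpy as np

def angle_from_lengths(a, b, c):
    """Angle opposite side c in a triangle with side lengths a, b, c."""
    return np.arccos((a**2 + b**2 - c**2) / (2.0 * a * b))

def angle_defect(spoke_lengths, rim_lengths):
    """2*pi minus the triangle angles meeting at a central vertex of a vertex star."""
    k = len(rim_lengths)
    total = sum(
        angle_from_lengths(spoke_lengths[i], spoke_lengths[(i + 1) % k],
                           rim_lengths[i])
        for i in range(k)
    )
    return 2.0 * np.pi - total

flat = angle_defect([1.0] * 6, [1.0] * 6)  # six equilateral triangles: defect 0
cone = angle_defect([1.0] * 5, [1.0] * 5)  # five triangles: defect pi/3 (a cone point)
```

Changing edge lengths changes the defect continuously, which is what makes edge lengths a differentiable parameterization of the metric suitable for the automatic-differentiation optimization the paper describes.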
We study the statistical properties of nonparametric distance-based (isotropic) local polynomial regression estimators of the boundary average treatment effect curve, a key causal functional parameter capturing heterogeneous treatment effects in boundary discontinuity designs. We present necessary and/or sufficient conditions for identification, estimation, and inference in large samples, both pointwise and uniformly along the boundary. Our theoretical results highlight the crucial role played by the ``regularity'' of the boundary (a one-dimensional manifold) over which identification, estimation, and inference are conducted. Our methods are illustrated with simulated data. Companion general-purpose software is provided.
We establish a rigorous asymptotic theory for the joint estimation of roughness and scale parameters in two-dimensional Gaussian random fields with power-law generalized covariances \cite{Matheron1973, Stein1999, Yaglom1987}. Our main results are bivariate central limit theorems for a class of method-of-moments estimators under increasing-domain and fixed-domain asymptotics. The fixed-domain result follows immediately from the increasing-domain result via the self-similarity of Gaussian random fields with power-law generalized covariances \cite{IstasLang1997, Coeurjolly2001, ZhuStein2002}. These results provide a unified distributional framework across these two classical regimes \cite{AvramLeonenkoSakhno2010-ESAIM, BiermeBonamiLeon2011-EJP} that makes the unusual behavior of the estimates under fixed-domain asymptotics intuitively obvious. Our increasing-domain asymptotic results use spatial averages of quadratic forms of (iterated) bilinear product difference filters that yield explicit expressions for the estimates of roughness and scale, to which existing theorems on such averages \cite{BreuerMajor1983,Hannan1970} can be readily applied. We further show that the asymptotics remain valid under modestly irregular sampling due to jitter or missing observations. For the fixed-domain setting, the results extend to models that behave sufficiently like the power-law model at high frequencies, such as the often used Mat\'ern model \cite{ZhuStein2006, WangLoh2011EJS, KaufmanShaby2017EJS}.
We study the recovery of the distribution function $F_X$ of a random variable $X$ that is subject to an independent additive random error $\varepsilon$. To be precise, it is assumed that the target variable $X$ is available only in the form of a blurred surrogate $Y = X + \varepsilon$. The distribution function $F_Y$ then corresponds to the convolution of $F_X$ and $F_\varepsilon$, so that the reconstruction of $F_X$ is a kind of deconvolution problem. Such problems have a long history in mathematics, and most of the approaches proposed in the past use integral transforms or matrix algorithms. The present article avoids these tools and is entirely confined to the domain of distribution functions. Our main idea relies on transforming an integral equation of the first kind into one of the second kind. From this, starting with right-lateral discrete target and error variables, a representation for $F_X$ in terms of available quantities is obtained, which facilitates unbiased estimation from a $Y$-sample. It turns out that these results even extend to cases in which $X$ is not discrete. Finally, in a general setup, our approach gives rise to an approximation of $F_X$ as a certain Neumann sum. The properties of this sum are briefly examined both theoretically and visually. The paper concludes with a short discussion of operator-theoretic aspects and an outlook on further research. Various plots underline our results and illustrate the estimation performance of our method.
Low-rank pseudoinverses are widely used to approximate matrix inverses in scalable machine learning, optimization, and scientific computing. However, real-world matrices are often observed with noise, arising from sampling, sketching, and quantization. The spectral-norm robustness of low-rank inverse approximations remains poorly understood. We systematically study the spectral-norm error $\| (\tilde{A}^{-1})_p - A_p^{-1} \|$ for an $n\times n$ symmetric matrix $A$, where $A_p^{-1}$ denotes the best rank-\(p\) approximation of $A^{-1}$, and $\tilde{A} = A + E$ is a noisy observation. Under mild assumptions on the noise, we derive sharp non-asymptotic perturbation bounds that reveal how the error scales with the eigengap, spectral decay, and noise alignment with low-curvature directions of $A$. Our analysis introduces a novel application of contour integral techniques to the \emph{non-entire} function $f(z) = 1/z$, yielding bounds that improve over naive adaptations of classical full-inverse bounds by up to a factor of $\sqrt{n}$. Empirically, our bounds closely track the true perturbation error across a variety of real-world and synthetic matrices, while estimates based on classical results tend to significantly overpredict. These findings offer practical, spectrum-aware guarantees for low-rank inverse approximations in noisy computational environments.
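A hedged numerical sketch of the quantity studied (the matrix ensemble, noise level, and spectrum are our own choices): compare the best rank-$p$ approximation of $A^{-1}$ with that of a noisy observation, in spectral norm:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 50, 5

# Symmetric A with a linear spectrum; the best rank-p approximation of A^{-1}
# keeps the p smallest eigenvalues of A (i.e., the largest of A^{-1}).
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T

def rank_p_inverse(M, p):
    """Best rank-p approximation (spectral truncation) of M^{-1}, M symmetric."""
    w, V = np.linalg.eigh(M)
    inv = 1.0 / w
    idx = np.argsort(np.abs(inv))[::-1][:p]   # p largest |eigenvalues| of M^{-1}
    return (V[:, idx] * inv[idx]) @ V[:, idx].T

E = 1e-3 * rng.normal(size=(n, n))
E = (E + E.T) / 2.0                           # symmetric perturbation
err = np.linalg.norm(rank_p_inverse(A + E, p) - rank_p_inverse(A, p), 2)
```

The error is governed by the eigengap around the retained eigenvalues and by how the noise aligns with low-curvature directions, which is what the paper's perturbation bounds make precise.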
We consider the problem of density estimation in the context of multiscale Langevin diffusion processes, where a single-scale homogenized surrogate model can be derived. In particular, our aim is to learn the density of the invariant measure of the homogenized dynamics from a continuous-time trajectory generated by the full multiscale system. We propose a spectral method based on a truncated Fourier expansion with Hermite functions as orthonormal basis. The Fourier coefficients are computed directly from the data owing to the ergodic theorem. We prove that the resulting density estimator is robust and converges to the invariant density of the homogenized model as the scale separation parameter vanishes, provided the time horizon and the number of Fourier modes are suitably chosen in relation to the multiscale parameter. The accuracy and reliability of this methodology is further demonstrated through a series of numerical experiments.
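A minimal sketch of the spectral step (with i.i.d. Gaussian draws standing in for trajectory data, and the truncation level chosen arbitrarily; both are our own simplifications): estimate the Hermite-function Fourier coefficients by sample averages and reconstruct the density:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def hermite_fn(k, x):
    """Orthonormal Hermite function psi_k(x) (physicists' convention)."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    norm = 1.0 / np.sqrt(2.0**k * factorial(k) * np.sqrt(np.pi))
    return norm * hermval(x, coef) * np.exp(-x**2 / 2.0)

rng = np.random.default_rng(7)
sample = rng.normal(size=50000)           # i.i.d. stand-in for ergodic trajectory data
K = 10                                    # truncation level

# Fourier coefficients c_k = E[psi_k(X)] estimated by sample averages
# (in the paper, time averages justified by the ergodic theorem).
coefs = [hermite_fn(k, sample).mean() for k in range(K)]

xs = np.linspace(-3.0, 3.0, 61)
f_hat = sum(c * hermite_fn(k, xs) for k, c in enumerate(coefs))
```

For this Gaussian target the reconstruction is close to the true density; the paper's analysis concerns how the time horizon and the number of modes must scale with the multiscale parameter.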
We consider a stochastic multi-armed bandit problem with i.i.d. rewards where the expected reward function is multimodal with at most m modes. We propose the first known computationally tractable algorithm for computing the solution to the Graves-Lai optimization problem, which in turn enables the implementation of asymptotically optimal algorithms for this bandit problem. The code for the proposed algorithms is publicly available at https://github.com/wilrev/MultimodalBandits
The empirical Orlicz norm based on a random sample is defined as a natural estimator of the Orlicz norm of a univariate probability distribution. A law of large numbers is derived under minimal assumptions; it extends readily to linear and nonparametric regression models. We then supply sufficient conditions for a central limit theorem with a standard rate of convergence. These conditions exclude certain canonical examples, such as the empirical sub-Gaussian norm of normally distributed random variables. For the latter, we discover a nonstandard rate of $n^{1/4} \log(n)^{-3/8}$, with a heavy-tailed, stable limit distribution. It is shown that in general, the empirical Orlicz norm does not admit any uniform rate of convergence over the class of distributions with bounded Orlicz norm.
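A minimal sketch of the estimator (the bisection bracket and sample are our own choices): the empirical Orlicz norm replaces the expectation in $\|X\|_\psi = \inf\{t > 0 : E\,\psi(|X|/t) \le 1\}$ with a sample mean, and can be computed by bisection; here with the sub-Gaussian Orlicz function $\psi_2(u) = e^{u^2} - 1$:

```python
import numpy as np

def empirical_orlicz(sample, psi, lo=1e-3, hi=1e3, iters=60):
    """Empirical Orlicz norm: inf{t > 0 : mean(psi(|X_i|/t)) <= 1}, by bisection."""
    a = np.abs(sample)
    def excess(t):
        with np.errstate(over="ignore"):   # psi may overflow to inf for tiny t
            return np.mean(psi(a / t)) - 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:                # mean(psi) decreases in t
            lo = mid
        else:
            hi = mid
    return hi

psi2 = lambda u: np.expm1(u**2)            # sub-Gaussian Orlicz function
rng = np.random.default_rng(6)
norm_hat = empirical_orlicz(rng.normal(size=100000), psi2)
# Population value for N(0,1): solve E[exp(X^2/t^2)] = 2, giving t = sqrt(8/3).
```

This is exactly the canonical example where the abstract reports a nonstandard rate: the summands $\psi_2(|X_i|/t)$ have infinite variance near the true norm, so convergence is slow and the limit is stable rather than Gaussian.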
We investigate the high-probability estimation of discrete distributions from an i.i.d. sample under $\chi^2$-divergence loss. Although the minimax risk in expectation is well understood, its high-probability counterpart remains largely unexplored. We provide sharp upper and lower bounds for the classical Laplace estimator, showing that it achieves optimal performance among estimators that do not rely on the confidence level. We further characterize the minimax high-probability risk for any estimator and demonstrate that it can be attained through a simple smoothing strategy. Our analysis highlights an intrinsic separation between asymptotic and non-asymptotic guarantees, with the latter suffering from an unavoidable overhead. This work sharpens existing guarantees and advances the theoretical understanding of divergence-based estimation.
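A minimal sketch of the Laplace (add-one) estimator, with toy counts of our own; smoothing matters under $\chi^2$-divergence loss because every cell must receive positive mass:

```python
import numpy as np

counts = np.array([5, 3, 0, 2])            # observed counts over k = 4 symbols
k, n = len(counts), counts.sum()

p_laplace = (counts + 1) / (n + k)         # add-one smoothing: strictly positive
p_emp = counts / n                         # raw frequencies: has a zero cell

def chi2_div(p, q):
    """Chi-square divergence: sum_i (p_i - q_i)^2 / q_i."""
    return np.sum((p - q) ** 2 / q)

true_p = np.array([0.4, 0.3, 0.2, 0.1])    # hypothetical truth
loss = chi2_div(true_p, p_laplace)         # finite; with p_emp it would be infinite
```

Because `p_emp` assigns zero mass to an observed-zero symbol, its $\chi^2$ loss against any fully supported truth is infinite, while the smoothed estimator stays finite, which is why smoothing is central to the high-probability guarantees.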
Hypergraphs provide a natural framework for modeling higher-order interactions, yet their theoretical underpinnings in semi-supervised learning remain limited. We provide an asymptotic consistency analysis of variational learning on random geometric hypergraphs, precisely characterizing the conditions ensuring the well-posedness of hypergraph learning as well as showing convergence to a weighted $p$-Laplacian equation. Motivated by this, we propose Higher-Order Hypergraph Learning (HOHL), which regularizes via powers of Laplacians from skeleton graphs for multiscale smoothness. HOHL converges to a higher-order Sobolev seminorm. Empirically, it performs strongly on standard baselines.
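A minimal sketch of the regularizer's form (a toy path graph of our own; the actual method uses multiscale skeleton-graph Laplacians): penalize $u^\top L^s u$, which charges oscillatory signals more than smooth ones:

```python
import numpy as np

# Path graph on 6 nodes; skeleton-graph Laplacian L = D - W.
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

def hohl_seminorm(u, L, s=2):
    """Higher-order smoothness penalty u^T L^s u (s = 1 is the Dirichlet energy)."""
    return u @ np.linalg.matrix_power(L, s) @ u

u_smooth = np.linspace(0.0, 1.0, n)        # linear ramp
u_rough = np.array([0.0, 1, 0, 1, 0, 1])   # oscillating signal
```

Raising the Laplacian power `s` enforces higher-order smoothness, which is the mechanism behind the convergence to a higher-order Sobolev seminorm stated in the abstract.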
Correlation analysis is a fundamental step for extracting meaningful insights from complex datasets. In this paper, we investigate the problem of detecting correlation between two Erd\H{o}s-R\'enyi graphs $G(n,p)$, formulated as a hypothesis testing problem: under the null hypothesis, the two graphs are independent, while under the alternative hypothesis, they are correlated. We develop a polynomial-time test by counting bounded-degree motifs and prove its effectiveness for any constant correlation coefficient $\rho$ when the edge probability satisfies $p\ge n^{-2/3}$. Our results overcome the previous limitation requiring $\rho \ge \sqrt{\alpha}$, where $\alpha\approx 0.338$ is Otter's constant, extending the guarantee to any constant $\rho$. Methodologically, bounded-degree motifs -- ubiquitous in real networks -- make the proposed statistic both natural and scalable. We also validate our method on synthetic and real co-citation networks, further confirming that this simple motif family effectively captures correlation signals and exhibits strong empirical performance.
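As a stripped-down illustration (our own toy version using the single edge, the simplest bounded-degree motif, rather than the paper's full motif family): count co-occurring edges in a correlated Erdős-Rényi pair and compare with the expectation under independence:

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, rho = 400, 0.1, 0.6

# Correlated Erdos-Renyi pair with matching G(n, p) marginals:
# P(edge in G2 | edge in G1) = p + rho(1-p), P(edge in G2 | no edge) = p(1-rho).
G1 = np.triu(rng.random((n, n)) < p, 1)
keep = rng.random((n, n)) < p + rho * (1.0 - p)
new = rng.random((n, n)) < p * (1.0 - rho)
G2 = np.triu(np.where(G1, keep, new), 1)

# Single-edge motif statistic: edge overlap vs. its null (independence) expectation.
pairs = n * (n - 1) / 2
overlap = np.logical_and(G1, G2).sum()
expected_null = p * p * pairs
z = (overlap - expected_null) / np.sqrt(pairs * p * p * (1 - p * p))
```

Under independence the standardized overlap is approximately standard normal, while under correlation it is strongly inflated; the paper's statistic aggregates counts over a richer family of bounded-degree motifs to handle small $\rho$ and sparser regimes.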