This study introduces a mathematical framework to investigate the viability and reachability of production systems under constraints. We develop a model that incorporates key decision variables, such as pricing policy, quality investment, and advertising, to analyze short-term tactical decisions and long-term strategic outcomes. In the short term, we construct a capture basin that defines the initial conditions under which production viability constraints are satisfied within the target zone. In the long term, we explore the dynamics of product quality and market demand to reach and sustain the desired target. Hamilton-Jacobi-Bellman (HJB) theory characterizes the capture basin and viability kernel via viscosity solutions of the HJB equation. This approach avoids controllability assumptions and is well suited to viability problems with specified targets. It provides managers with insights into maintaining production and inventory levels within viable ranges while accounting for product quality and evolving market demand. We study the HJB equation numerically to design and test computational methods that validate the theoretical insights. The simulations offer practical tools for decision-makers to address operational challenges while aligning with long-term sustainability goals. By linking rigorous mathematics with actionable solutions, this study enhances production system performance and resilience.
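As a concrete illustration of the short-term construction, the following minimal sketch approximates a capture basin by backward dynamic programming on a grid. The one-dimensional inventory dynamics, the constraint set K, the target zone C, and all parameter values are hypothetical placeholders chosen for illustration, not the model of this study.

```python
# Minimal sketch (hypothetical model): approximate the capture basin of a
# 1D inventory system x' = u - d on a grid. The production rate u is the
# control, demand d is fixed, K = [0, 10] is the constraint set, and
# C = [4, 6] is the target zone. None of these values come from the paper.
import numpy as np

xs = np.linspace(0.0, 10.0, 201)          # state grid over K
us = np.linspace(0.0, 2.0, 21)            # admissible production rates
d, dt, T = 1.0, 0.05, 5.0                 # demand, time step, horizon

viable = (xs >= 4.0) & (xs <= 6.0)        # points already in the target C

# Backward iteration: x joins the capture basin if some control keeps the
# trajectory in K and reaches an already-viable point one step later.
for _ in range(int(T / dt)):
    nxt = np.zeros_like(viable)
    for i, x in enumerate(xs):
        if viable[i]:
            continue
        for u in us:
            xn = x + dt * (u - d)
            if 0.0 <= xn <= 10.0:
                j = int(round((xn - xs[0]) / (xs[1] - xs[0])))
                if viable[j]:
                    nxt[i] = True
                    break
    viable |= nxt

print("capture basin ~ [%.2f, %.2f]" % (xs[viable].min(), xs[viable].max()))
```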
The consumption function maps current wealth and the exogenous state to current consumption. We prove the existence and uniqueness of a consumption function when the agent has a preference for wealth. When the period utility functions are restricted to power functions, we prove that the consumption function is asymptotically linear as wealth tends to infinity and provide a complete characterization of the asymptotic slopes. When the risk aversion with respect to wealth is less than that for consumption, the asymptotic slope is zero regardless of other model parameters, implying wealthy households save a large fraction of their income, consistent with empirical evidence.
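In the power-utility case, the asymptotic result admits a compact statement. With CRRA period utilities in generic notation (the paper's exact normalization may differ),
\begin{equation} u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad v(w) = \frac{w^{1-\psi}}{1-\psi}, \end{equation}
where $\gamma$ is the risk aversion for consumption and $\psi$ that for wealth, the zero-slope case above reads
\begin{equation} \psi < \gamma \;\Longrightarrow\; \lim_{w \to \infty} \frac{c(w)}{w} = 0, \end{equation}
so consumption grows sublinearly in wealth and wealthy households consume a vanishing fraction of it.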
This paper presents data-driven approaches for integrated assortment planning and inventory allocation that significantly improve fulfillment efficiency at JD.com, a leading e-commerce company. JD.com uses a two-level distribution network comprising regional distribution centers (RDCs) and front distribution centers (FDCs). Selecting which products to stock at FDCs and then optimizing daily inventory allocation from RDCs to FDCs is critical to fulfillment efficiency, which in turn is crucial for customer experience. For assortment planning, we propose efficient algorithms to maximize the number of orders that can be fulfilled by FDCs (local fulfillment). For inventory allocation, we develop a novel end-to-end algorithm that integrates forecasting, optimization, and simulation to minimize lost sales and inventory transfer costs. Numerical experiments demonstrate that our methods outperform existing approaches: the assortment planning algorithms increase local order fulfillment rates by 0.54%, and the inventory allocation algorithm increases FDC demand satisfaction rates by 1.05%. Given JD.com's high-volume operations, with millions of weekly orders per region, these improvements yield substantial benefits beyond the company's established supply chain system. Implementation across JD.com's network has reduced costs, improved stock availability, and increased local order fulfillment rates for millions of orders annually.
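To make the assortment objective concrete, here is a minimal greedy sketch for a single FDC: an order counts as locally fulfilled only if every SKU it contains is stocked. The order data, the capacity k, and the greedy rule are illustrative assumptions, not JD.com's production algorithms.

```python
# Toy greedy assortment selection for one FDC: repeatedly stock the SKU
# that completes the most orders. Data and capacity are made up.
orders = [{"a", "b"}, {"a"}, {"b", "c"}, {"c"}, {"a", "c"}]
k = 2                                     # FDC capacity in distinct SKUs

stocked = set()
for _ in range(k):
    best, best_gain = None, -1
    for sku in {s for o in orders for s in o} - stocked:
        trial = stocked | {sku}
        gain = sum(o <= trial for o in orders)   # orders fully covered
        if gain > best_gain:
            best, best_gain = sku, gain
    stocked.add(best)

print(stocked, "locally fulfills", sum(o <= stocked for o in orders), "orders")
```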
We show that the existence of a strictly compatible pair of control Lyapunov and control barrier functions is equivalent to the existence of a single smooth Lyapunov function that certifies both asymptotic stability and safety. This characterization complements existing literature on converse Lyapunov functions by establishing a partial differential equation (PDE) characterization with prescribed boundary conditions on the safe set, ensuring that the safe set is exactly certified by this Lyapunov function. The result also implies that if a safety and stability specification cannot be certified by a single Lyapunov function, then any pair of control Lyapunov and control barrier functions necessarily leads to a conflict and cannot be satisfied simultaneously in a robust sense.
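To fix notation (a common textbook formulation, not necessarily the paper's exact definitions): for $\dot{x} = f(x,u)$ with safe set $\{x : h(x) \ge 0\}$, a control Lyapunov function $V$ and control barrier function $h$ are compatible if at every relevant $x$ some common input $u$ satisfies
\begin{equation} \nabla V(x) \cdot f(x,u) \le -\alpha(V(x)) \quad \text{and} \quad \nabla h(x) \cdot f(x,u) \ge -\beta(h(x)), \end{equation}
for class-$\mathcal{K}$ functions $\alpha, \beta$; strict compatibility additionally requires a uniform margin in these inequalities.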
In this paper, we introduce three different classes of undergraduate research projects built around model building and integer programming. These projects focus on determining and analyzing solutions to the game The Genius Square, optimizing the allocation of trains to maximize points in the game Ticket to Ride, and (code)breaking monoalphabetic substitution ciphers. Initial models and analyses for these scenarios, drawn from previous undergraduate research projects, are shared along with a variety of open research questions.
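As a taste of the modeling involved, the sketch below encodes a toy version of the Ticket to Ride question as a knapsack-style integer program in PuLP. The route data and the 12-train budget are made up, and real play adds connectivity and ticket constraints that this sketch omits.

```python
# Toy Ticket to Ride IP: choose routes to claim, under a train-car budget,
# to maximize points. Route lengths/points here are a small made-up sample.
import pulp

routes = {"A-B": (1, 1), "B-C": (2, 2), "C-D": (3, 4),
          "D-E": (4, 7), "E-F": (5, 10), "F-G": (6, 15)}
budget = 12                               # train cars available (illustrative)

x = {r: pulp.LpVariable(r.replace("-", "_"), cat="Binary") for r in routes}
prob = pulp.LpProblem("ticket_to_ride_toy", pulp.LpMaximize)
prob += pulp.lpSum(pts * x[r] for r, (cars, pts) in routes.items())          # points
prob += pulp.lpSum(cars * x[r] for r, (cars, pts) in routes.items()) <= budget
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([r for r in routes if x[r].value() == 1], "points:", pulp.value(prob.objective))
```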
We show that gradient dynamics can converge to any local minimum of a semi-algebraic function. Our results cover both discrete and continuous dynamics. For discrete gradient dynamics, we show convergence to any local minimum provided the stepsizes are nonsummable and sufficiently small and the initial value is properly chosen.
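A minimal numerical illustration of the discrete case (the objective, constants, and initialization below are ours, chosen only to exhibit the stepsize condition):

```python
# Gradient descent with nonsummable, diminishing stepsizes eta_k = c/(k+1)
# on the two-well function f(x) = (x^2 - 1)^2, which has local minima at
# x = -1 and x = +1. Started near a well with small c, the iterates stay
# in that well and converge to its minimum.
def grad(x):                # f'(x) = 4x(x^2 - 1)
    return 4.0 * x * (x * x - 1.0)

x, c = 0.8, 0.05            # initial value near the minimum at +1
for k in range(20000):
    x -= c / (k + 1) * grad(x)

print(x)                    # ~ 1.0
```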
The first part of this paper studies the evolution of gradient flow for homogeneous neural networks near a class of saddle points exhibiting a sparsity structure. The choice of these saddle points is motivated by previous work on homogeneous networks, which identified the first saddle point encountered by gradient flow after escaping the origin. It is shown here that, when initialized sufficiently close to such saddle points, gradient flow remains near the saddle point for a sufficiently long time, during which the set of weights with small norm remains small but converges in direction. Furthermore, important empirical observations are made on the behavior of gradient descent after escaping these saddle points. The second part of the paper, motivated by these results, introduces a greedy algorithm for training deep neural networks called Neuron Pursuit (NP). It is an iterative procedure that alternates between expanding the network by adding neurons with carefully chosen weights and minimizing the training loss on the augmented network. The efficacy of the proposed algorithm is validated through numerical experiments.
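The following sketch captures the high-level structure of Neuron Pursuit as described in this abstract: alternate between adding a neuron and minimizing the loss on the enlarged network. The random initialization of the new neuron is a simplification made here for illustration; the algorithm itself chooses those weights carefully. This is not the authors' code.

```python
# Schematic Neuron Pursuit loop for a one-hidden-layer ReLU network:
# grow the network one neuron per round, then re-minimize the loss.
import torch

X, y = torch.randn(256, 4), torch.randn(256, 1)
W, a = torch.zeros(0, 4), torch.zeros(0, 1)   # hidden weights, output weights

for step in range(5):                          # add one neuron per round
    W = torch.cat([W, 0.1 * torch.randn(1, 4)]).requires_grad_(True)
    a = torch.cat([a, 0.1 * torch.randn(1, 1)]).requires_grad_(True)
    opt = torch.optim.Adam([W, a], lr=1e-2)
    for _ in range(500):                       # minimize loss on augmented net
        opt.zero_grad()
        loss = ((torch.relu(X @ W.T) @ a - y) ** 2).mean()
        loss.backward()
        opt.step()
    W, a = W.detach(), a.detach()
    print(f"neurons={W.shape[0]}, loss={loss.item():.4f}")
```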
This paper solves and analyzes a trajectory optimization problem to deflect Earth-crossing objects (ECOs) using continuous thrust generated by a laser ablation system. The optimal control is determined for various initial ECO-Earth configurations to achieve the desired miss distance. The formulation incorporates the Earth's gravitational effect on the object via the patched-conic method. The constrained trajectory optimization problem is solved using Non-Linear Programming (NLP). First, the continuous control problem is solved assuming both constant and variable power consumption, followed by a detailed comparison of the two continuous control schemes. The work then extends to sub-optimal solutions that can accommodate power fluctuations in the controller. The optimal control offers a range of alternative operational methods for asteroid deflection missions, with trade-offs between power consumption and total mission time. For impulsive deflection, existing work reports two optimal solutions; one is found to be preferable because it leads to a final ECO orbit whose next Earth passage occurs later. Finally, the Moon's gravitational effect on the orbit of an ECO is studied. The reported results provide a comprehensive understanding of various scenarios in ECO deflection.
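To illustrate the solution approach in miniature, the sketch below transcribes a one-dimensional toy deflection problem (double integrator, bounded thrust, terminal displacement target, minimum control effort) into an NLP with SciPy. The actual formulation involves the full ECO dynamics with patched-conic gravity and is far more involved; every number here is a placeholder.

```python
# Direct transcription of a toy continuous-thrust problem into an NLP:
# minimize control effort subject to bounded thrust and a terminal
# displacement ("miss distance") requirement.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

N, dt, a_max, d_target = 50, 1.0, 0.01, 5.0

def final_displacement(u):
    x = v = 0.0
    for uk in u:                                   # forward Euler rollout
        v += dt * uk
        x += dt * v
    return x

res = minimize(
    lambda u: dt * np.sum(u ** 2),                 # fuel/power proxy
    x0=np.full(N, 0.001),
    bounds=[(-a_max, a_max)] * N,
    constraints=NonlinearConstraint(final_displacement, d_target, np.inf),
)
print("effort=%.4f, displacement=%.2f" % (res.fun, final_displacement(res.x)))
```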
We propose and analyze a randomization scheme for a general class of impulse control problems. The solution to this randomized problem is characterized as the fixed point of a compound operator which consists of a regularized nonlocal operator and a regularized stopping operator. This approach allows us to derive a semi-linear Hamilton-Jacobi-Bellman (HJB) equation. Through an equivalent randomization scheme with a Poisson compound measure, we establish a verification theorem that implies the uniqueness of the solution. Via an iterative approach, we prove the existence of the solution. The existence-and-uniqueness result ensures the randomized problem is well-defined. We then demonstrate that our randomized impulse control problem converges to its classical counterpart as the randomization parameter $\pmb \lambda$ vanishes. This convergence, combined with the value function's $C^{2,\alpha}_{loc}$ regularity, confirms our framework provides a robust approximation and a foundation for developing learning algorithms. Under this framework, we propose an offline reinforcement learning (RL) algorithm. Its policy improvement step is naturally derived from the iterative approach from the existence proof, which enjoys a geometric convergence rate. We implement a model-free version of the algorithm and numerically demonstrate its effectiveness using a widely-studied example. The results show that our RL algorithm can learn the randomized solution, which accurately approximates its classical counterpart. A sensitivity analysis with respect to the volatility parameter $\sigma$ in the state process effectively demonstrates the exploration-exploitation tradeoff.
In this paper, we propose a unified semismooth Newton-based algorithmic framework, SSNCVX, for solving a broad class of convex composite optimization problems. By exploiting augmented Lagrangian duality, we reformulate the original problem as a saddle point problem and characterize the optimality conditions via a semismooth system of nonlinear equations. The nonsmooth structure is handled internally, without problem-specific transformations or auxiliary variables. This design allows easy modifications to the model structure, such as adding linear, quadratic, or shift terms, through simple interface-level updates. The proposed method features a single-loop structure that simultaneously updates the primal and dual variables via a semismooth Newton step. Extensive numerical experiments on benchmark datasets show that SSNCVX outperforms state-of-the-art solvers in both robustness and efficiency across a wide range of problems.
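As a small illustration of the semismooth Newton mechanism on a toy $\ell_1$-regularized least-squares problem (not via the SSNCVX interface or its saddle point reformulation), one can apply Newton steps to the proximal-gradient fixed-point residual:

```python
# Semismooth Newton on F(x) = x - prox(x - eta*grad f(x)) for
# min 0.5||Ax - b||^2 + lam*||x||_1. Plain Newton steps, no globalization;
# a practical solver would add a line search. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 20)), rng.standard_normal(40), 0.1
eta = 1.0 / np.linalg.norm(A, 2) ** 2          # step so I - eta*A^T A is PSD

def prox(z):                                    # prox of eta*lam*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)

x = np.zeros(20)
for it in range(50):
    z = x - eta * A.T @ (A @ x - b)
    F = x - prox(z)                             # fixed-point residual
    if np.linalg.norm(F) < 1e-10:
        break
    D = (np.abs(z) > eta * lam).astype(float)   # an element of prox's Clarke Jacobian
    J = np.eye(20) - D[:, None] * (np.eye(20) - eta * A.T @ A)
    x = x - np.linalg.solve(J + 1e-8 * np.eye(20), F)

print(it, np.linalg.norm(F), np.round(x, 3))
```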
We propose and analyze a class of second-order dynamical systems for continuous-time optimization that incorporate fractional-order gradient terms. The system is given by \begin{equation} \ddot{x}(t) + \frac{\alpha}{t}\dot{x}(t) + \nabla^{\theta} f(x(t)) = 0, \end{equation} where $\theta \in (1,2)$, and the fractional operators are interpreted in the sense of Caputo, Riemann--Liouville, and Gr\"unwald--Letnikov derivatives. This formulation interpolates between memory effects of fractional dynamics and higher-order damping mechanisms, thereby extending the classical Nesterov accelerated flow into the fractional domain. A particular focus of our analysis is the regime $\alpha \leq 3$, and especially the critical case $\alpha = 3$, where the ordinary Nesterov flow fails to guarantee convergence. We show that in the fractional setting, convergence can still be established, with fractional gradient terms providing a stabilizing effect that compensates for the borderline damping. This highlights the ability of fractional dynamics to overcome fundamental limitations of classical second-order flows. We develop a convergence analysis framework for such systems by introducing fractional Opial-type lemmas and Lyapunov memory functionals. In the convex case, we establish weak convergence of trajectories toward the minimizer, as well as asymptotic decay of functional values. For strongly convex functions, we obtain explicit convergence rates that improve upon those of standard second-order flows.
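For reference, a standard Gr\"unwald--Letnikov discretization with step $h$ (the generic formula, not the specific scheme analyzed here) is
\begin{equation} D^{\theta} g(t) \approx h^{-\theta} \sum_{j=0}^{\lfloor t/h \rfloor} w_j^{(\theta)} \, g(t - jh), \qquad w_0^{(\theta)} = 1, \quad w_j^{(\theta)} = \Bigl(1 - \frac{\theta+1}{j}\Bigr) w_{j-1}^{(\theta)}, \end{equation}
so each step carries the full memory of the trajectory, which is the kind of history dependence the Lyapunov memory functionals above are designed to control.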
Neural network (NN) training is inherently a large-scale matrix optimization problem, yet the matrix structure of NN parameters has long been overlooked. Recently, the optimizer Muon \cite{jordanmuon}, which explicitly exploits this structure, has gained significant attention for its strong performance in foundation model training. A key component contributing to Muon's success is matrix orthogonalization. In this paper, we propose {\it low-rank orthogonalization}, which explicitly leverages the low-rank nature of gradients during NN training. Building on this, we propose low-rank matrix-signed gradient descent and a low-rank variant of Muon. Our numerical experiments demonstrate the superior performance of low-rank orthogonalization, with the low-rank Muon achieving promising results in GPT-2 and LLaMA pretraining -- surpassing the performance of the carefully tuned vanilla Muon. Theoretically, we establish the iteration complexity of the low-rank matrix-signed gradient descent for finding an approximate stationary solution, as well as that of low-rank Muon for finding an approximate stochastic stationary solution under heavy-tailed noise.
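The sketch below shows the two ingredients in simplified form: Newton-Schulz orthogonalization of a gradient matrix (the textbook cubic iteration; Muon itself uses a tuned higher-order polynomial) and a low-rank variant that orthogonalizes a truncated-SVD approximation of the gradient. It illustrates the idea only and is not the authors' implementation.

```python
# (i) Orthogonalize a gradient matrix via cubic Newton-Schulz iteration;
# (ii) low-rank variant: orthogonalize only a rank-r approximation.
import numpy as np

def orthogonalize(G, steps=20):
    X = G / (np.linalg.norm(G) + 1e-12)        # scale so all singular values < 1
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X        # cubic Newton-Schulz step
    return X                                    # ~ U V^T from the SVD of G

def low_rank_orthogonalize(G, r, steps=20):
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return orthogonalize((U[:, :r] * s[:r]) @ Vt[:r], steps)

G = np.random.default_rng(0).standard_normal((64, 32))
O = orthogonalize(G)
print(np.linalg.norm(O.T @ O - np.eye(32)))     # near 0: columns orthonormal
print(low_rank_orthogonalize(G, r=8).shape)
```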
In this work, we introduce an interior-point method that employs tensor decompositions to efficiently represent and manipulate the variables and constraints of semidefinite programs, targeting problems where the solutions may not be low-rank but admit low-tensor-train rank approximations. Our method maintains approximate superlinear convergence despite inexact computations in the tensor format and leverages a primal-dual infeasible interior-point framework. In experiments on Maximum Cut, Maximum Stable Set, and Correlation Clustering, the tensor-train interior point method handles problems up to size $2^{12}$ with duality gaps around $10^{-6}$ in approximately 1.5~h and using less than 2~GB of memory, outperforming state-of-the-art solvers on larger instances. Moreover, numerical evidence indicates that tensor-train ranks of the iterates remain moderate along the interior-point trajectory, explaining the scalability of the approach. Tensor-train interior point methods offer a promising avenue for problems that lack traditional sparsity or low-rank structure, exploiting tensor-train structures instead.
This note presents a novel, efficient economic model predictive control (EMPC) scheme for non-dissipative systems subject to state and input constraints. A new notion of convergence filters is introduced to address the stability of EMPC for constrained non-dissipative systems. Three convergence filters are designed accordingly and imposed in the receding-horizon optimization problem of EMPC. To improve online computational efficiency, a variable-horizon formulation without terminal constraints is adopted to balance the convergence speed, economic performance, and computational burden of EMPC. Moreover, sufficient conditions are derived to guarantee the recursive feasibility and stability of the EMPC. The advantages of the proposed EMPC are validated on a classical non-dissipative continuous stirred-tank reactor.
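Schematically, one way a convergence filter can enter the receding-horizon problem (our generic notation, not the note's exact formulation) is as a contraction constraint toward the optimal steady state $x_s$:
\begin{equation} \min_{u_0, \dots, u_{N-1}} \sum_{k=0}^{N-1} \ell_e(x_k, u_k) \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k), \;\; (x_k, u_k) \in \mathbb{X} \times \mathbb{U}, \;\; \|x_N - x_s\| \le \rho \|x_0 - x_s\|, \end{equation}
with $\rho \in (0,1)$, which forces progress toward $x_s$ even when the economic stage cost $\ell_e$ is not dissipative.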
Epidemic control frequently relies on adjusting interventions based on prevalence, but designing such policies is highly non-trivial due to uncertain intervention effects and costs and the difficulty of quantifying key transmission mechanisms and parameters. Here, using exact mathematical and computational methods, we reveal a fundamental limit of epidemic control: prevalence-feedback policies are outperformed by a single, optimally chosen constant control level. Specifically, we find no incentive to use prevalence-based control under a wide class of cost functions that depend arbitrarily on interventions and scale with infections. We also identify regimes where prevalence feedback is beneficial. Our results challenge the current understanding that prevalence-based interventions are required for epidemic control and suggest that, for many classes of epidemics, interventions should not be varied unless the epidemic is near the herd immunity threshold.
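The flavor of the comparison can be reproduced in a toy SIR model, where a tuned constant policy is compared against a simple prevalence-feedback rule. The dynamics, the cost form, and all parameters below are illustrative assumptions, not the exact setting of the paper.

```python
# Toy comparison: constant vs. prevalence-feedback control in an SIR model.
# u in [0, 1] is the fraction of transmission retained; cost is linear in
# infections plus a quadratic intervention cost.
import numpy as np

def run(policy, beta=0.3, gamma=0.1, dt=0.1, T=400.0):
    S, I, cost = 0.99, 0.01, 0.0
    for _ in range(int(T / dt)):
        u = policy(I)
        newinf = u * beta * S * I
        S -= dt * newinf
        I += dt * (newinf - gamma * I)
        cost += dt * (I + 0.5 * (1.0 - u) ** 2)
    return cost

const = min((run(lambda I, u=u: u), u) for u in np.linspace(0.2, 1.0, 41))
feedback = run(lambda I: max(0.2, 1.0 - 20.0 * I))   # tighten as prevalence rises
print("best constant policy cost: %.3f (u=%.2f)" % const)
print("prevalence feedback cost:  %.3f" % feedback)
```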
There is an increasing push for operational measures to reduce ships' bunker fuel consumption and carbon emissions, driven by International Maritime Organization (IMO) mandates. Key performance indicators such as the Energy Efficiency Operational Indicator (EEOI) focus on fuel efficiency, and strategies like trim optimization, virtual arrival, and green routing have emerged. The theoretical basis for these approaches is accurate prediction of fuel consumption as a function of sailing speed, displacement, trim, climate, and sea state. This study used 296 voyage reports (28 parameters) from a bulk carrier collected over one year (November 16, 2021 to November 21, 2022), integrated with hydrometeorological big data from the Copernicus Marine Environment Monitoring Service (CMEMS; 19 parameters) and the European Centre for Medium-Range Weather Forecasts (ECMWF; 61 parameters). The objective was to evaluate whether fusing external public data sources enhances modeling accuracy and to identify the parameters most influential on fuel consumption. The results reveal strong potential for machine learning techniques to accurately predict ship fuel consumption by combining voyage reports with climate and sea data. However, validation on similar classes of vessels remains necessary to confirm generalizability.
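A minimal sketch of the modeling pipeline described here, with synthetic stand-in data (the real voyage-report, CMEMS, and ECMWF column sets are far richer, and the fuel relation below is invented for illustration):

```python
# Fit a regressor for fuel consumption on voyage-report features merged
# with weather/sea-state features, and cross-validate.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 296                                      # one row per voyage report
df = pd.DataFrame({
    "speed_kn": rng.uniform(9, 14, n),
    "displacement_t": rng.uniform(60e3, 90e3, n),
    "trim_m": rng.uniform(-1, 1, n),
    "wave_height_m": rng.uniform(0, 4, n),   # e.g. from CMEMS
    "wind_speed_ms": rng.uniform(0, 15, n),  # e.g. from ECMWF
})
df["fuel_t_per_day"] = (0.02 * df.speed_kn ** 3 + 0.5 * df.wave_height_m
                        + 0.1 * df.wind_speed_ms + rng.normal(0, 1, n))

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, df.drop(columns="fuel_t_per_day"),
                         df["fuel_t_per_day"], cv=5, scoring="r2")
print("CV R^2: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```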
We study constrained bi-matrix games, with a particular focus on low-rank games. Our main contribution is a framework that reduces low-rank games to smaller, equivalent constrained games, along with a necessary and sufficient condition for when such reductions exist. Building on this framework, we present three approaches for computing the set of extremal Nash equilibria, based on vertex enumeration, polyhedral calculus, and vector linear programming. Numerical case studies demonstrate the effectiveness of the proposed reduction and solution methods.
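For the unconstrained baseline, the equilibria of a small bimatrix game can be enumerated with the nashpy library; the constrained and low-rank reduction machinery of the paper is not shown here.

```python
# Enumerate Nash equilibria of a small bimatrix game by support enumeration.
import numpy as np
import nashpy as nash

A = np.array([[3, 0], [5, 1]])      # row player's payoffs
B = np.array([[3, 5], [0, 1]])      # column player's payoffs

game = nash.Game(A, B)
for sigma_r, sigma_c in game.support_enumeration():
    print(sigma_r, sigma_c)
```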
We perform a Lie symmetry analysis of the tempered fractional Keller--Segel (TFKS) system, a chemotaxis model incorporating anomalous diffusion. A novel approach is used to handle the nonlocal nature of tempered fractional operators. By deriving the full set of Lie point symmetries and identifying the optimal one-dimensional subalgebras, we reduce the TFKS PDEs to ordinary differential equations (ODEs), yielding new exact solutions. These results offer insights into the long-term behavior and aggregation dynamics of the TFKS model and present a methodology applicable to other tempered fractional differential equations.