Aircraft aerodynamic design optimization must account for the varying operating conditions along the cruise segment, as opposed to designing at one fixed operating condition, to arrive at more realistic designs. Conventional approaches address this by performing a ``multi-point'' optimization that assumes a weighted average of the objectives at a set of sub-segments along the cruise segment. We argue that since such multi-point approaches are, inevitably, biased by the specification of the weights, they can lead to sub-optimal designs. Instead, we propose to optimize the aircraft design at multiple sub-segments simultaneously -- that is, via multiobjective optimization that leads to a set of Pareto optimal solutions. However, existing work in multiobjective optimization suffers from (i) a lack of sample efficiency (that is, it does not keep the number of function evaluations to convergence minimal), (ii) poor scalability with input dimensions and number of objectives, and (iii) the inability to generate a batch of iterates for synchronous parallel evaluations. To overcome these limitations, we propose a novel multiobjective Bayesian optimization methodology that demonstrates improved sample efficiency and accuracy compared to the state of the art. Inspired by Thompson sampling, our approach leverages Gaussian process surrogates and Bayesian decision theory to generate a sequence of iterates according to the probability that they are Pareto optimal. Our approach, named batch Pareto optimal Thompson sampling ($q\texttt{POTS}$), demonstrates superior empirical performance on a variety of synthetic experiments as well as a $24$-dimensional two-objective aerodynamic design optimization of the NASA common research model. We also provide open-source software of our methodology.
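The core idea admits a compact sketch (illustrative only, not the authors' $q\texttt{POTS}$ code; the RBF kernel, length scale, and toy objectives are assumptions): draw one joint posterior sample per objective from Gaussian process surrogates over a candidate set, and keep the candidates whose sampled objective vectors are Pareto optimal.

```python
import numpy as np

def rbf(A, B, ls=0.2):
    # squared-exponential kernel on scalar inputs
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)

def gp_posterior_sample(Xtr, ytr, Xc, rng, noise=1e-6):
    # one joint draw from the GP posterior at the candidate points Xc
    L = np.linalg.cholesky(rbf(Xtr, Xtr) + noise * np.eye(len(Xtr)))
    Ks = rbf(Xc, Xtr)
    mu = Ks @ np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    v = np.linalg.solve(L, Ks.T)
    cov = rbf(Xc, Xc) - v.T @ v
    Lc = np.linalg.cholesky(cov + 1e-6 * np.eye(len(Xc)))
    return mu + Lc @ rng.standard_normal(len(Xc))

def pareto_mask(F):
    # True for rows of F (objectives to minimize) that no other row dominates
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dom = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dom.any()
    return mask

rng = np.random.default_rng(0)
f1 = lambda x: (x - 0.3) ** 2          # toy objective 1 (assumed)
f2 = lambda x: (x - 0.7) ** 2          # toy objective 2 (assumed)
Xtr = rng.uniform(0, 1, 8)             # previously evaluated designs
Xc = np.linspace(0, 1, 101)            # candidate designs
S = np.column_stack([gp_posterior_sample(Xtr, f(Xtr), Xc, rng) for f in (f1, f2)])
batch = Xc[pareto_mask(S)]             # next batch: sampled-Pareto-optimal candidates
```

Because candidates are selected in proportion to how often posterior draws rank them Pareto optimal, repeated rounds concentrate evaluations on the Pareto set without any hand-tuned acquisition weights.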
Earth observation satellites (EOS) play a pivotal role in capturing and analyzing planetary phenomena, ranging from natural disasters to societal development. The EOS scheduling problem (EOSSP), which optimizes the schedule of EOS, is often solved with respect to nadir-directional EOS systems, thus restricting the observation time of targets and, consequently, the effectiveness of each EOS. This paper leverages state-of-the-art constellation reconfigurability to develop the reconfigurable EOS scheduling problem (REOSSP), wherein EOS are assumed to be maneuverable, reconfiguring into better-suited constellation configurations at multiple opportunities during a schedule. This paper develops a novel mixed-integer linear programming formulation for the REOSSP to optimally solve the scheduling problem for given parameters. Additionally, since the REOSSP can be computationally expensive for large-scale problems, a rolling horizon procedure (RHP) solution method is developed. The performance of the REOSSP is benchmarked against the EOSSP, which serves as a baseline, through a set of random instances where problem characteristics are varied and a case study in which Hurricane Sandy is used to demonstrate realistic performance. These experiments demonstrate the value of constellation reconfigurability in its application to the EOSSP, yielding higher-performing schedules, while the RHP reduces computational runtime for large-scale REOSSP instances.
We consider image registration as an optimal control problem using an optical flow formulation, i.e., we discuss an optimization problem that is governed by a linear hyperbolic transport equation. Requiring Lipschitz continuity of the vector fields that parametrize the transformation leads to an optimization problem in a non-reflexive Banach space. We introduce relaxations of the optimization problem involving smoothed maximum and minimum functions and appropriate Orlicz spaces. To derive well-posedness results for the relaxed optimization problem, we revisit and establish new existence and uniqueness results for the linear hyperbolic transport equations. We further discuss limit considerations with respect to the relaxation parameter and discretizations.
This paper proposes a novel hierarchical model predictive control (MPC) framework, called the Parent-Child MPC architecture, to steer nonlinear systems under uncertainty towards a target set while balancing computational complexity, without relying on conservative terminal constraints in online decision-making. By coupling a small-horizon Child MPC layer with one or more large-horizon Parent MPC layers, the architecture ensures recursive feasibility and stability through adjustable stage-wise constraints derived from tube-based control. As is demonstrated in our case studies, compared to traditional MPC methods, the proposed Parent-Child MPC architecture enhances performance and computational efficiency, reduces conservativeness, and enables scalable planning for certain nonlinear systems.
This work addresses the problem of linear system realizations by mammillary models, offering necessary and sufficient conditions under which a given transfer function can be represented in this form. While standard identification techniques may yield transfer functions without an explicit connection to underlying physiological processes, compartmental models, particularly mammillary ones, reflect the physiological dynamics of distribution and elimination. This feature is especially relevant in clinical pharmacology, where model parameters must correspond to meaningful biological processes to support interpretability, personalization, and safe drug delivery, such as in total intravenous anesthesia. To conclude, an application to a propofol infusion model illustrates how mammillary realizations can support physiologically interpretable system representations.
When developing a biotechnological process, the microorganism is first designed, e.g., using metabolic engineering. Then, the optimum fermentation parameters are determined on a laboratory scale, and lastly, they are transferred to the bioreactor scale. However, this step-by-step approach is costly and time-consuming and may result in suboptimal configurations. Herein, we present SimulKnockReactor, a bilevel optimization formulation that connects bioreactor design with microbial strain design and extends our previous formulation, SimulKnock (Ziegler et al., 2024, AIChE J.). At the upper (bioreactor) level, we minimize investment and operation costs for agitation, aeration, and pH control by determining the size and operating conditions of a continuous stirred-tank reactor - without selecting specific devices like the stirrer type. The lower (cellular) level is based on flux balance analysis and implements optimal reaction knockouts predicted by the upper level. Our results with a core and a genome-scale metabolic model of Escherichia coli show that the substrate is the largest cost factor. Our simultaneous approach outperforms a sequential approach using OptKnock. Namely, the knockouts proposed by OptKnock cannot guarantee the required production capacity in all cases considered. When both approaches deliver feasible results, the total annual costs are the same or lower with SimulKnockReactor, highlighting the advantage of combining cellular and bioreactor levels. This work is a further step towards a fully integrated bioprocess design.
We study the optimal placement of an unlimited-capacity battery in power grids under a centralized market model, where the independent system operator (ISO) aims to minimize total generation costs through load shifting. The optimal battery placement is not well understood in the existing literature, especially regarding the influence of network topology on minimizing generation costs. Our work starts with decomposing the Mixed-Integer Linear Programming (MILP) problem into a series of Linear Programming (LP) formulations. For power grids with sufficiently large generation capacity or tree topologies, we derive analytical cost expressions demonstrating that, under reasonable assumptions, the weighted degree is the only topological factor for optimal battery placement. We also discuss the minor impact of higher-order topological conditions on tree-topology networks. To characterize the localized nature of a single battery's impact, we establish that the relative cost-saving benefit of a single battery decreases as the network scales. Furthermore, we design a low-complexity algorithm for weakly-cyclic networks. Numerical experiments show that our algorithm is not only approximately 100 times faster than commercial solvers but also maintains high accuracy even when some theoretical assumptions are relaxed.
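The weighted-degree criterion admits a minimal sketch (the 5-bus network and line weights below are hypothetical, and the paper's algorithm for weakly-cyclic networks is more involved): compute each bus's weighted degree and place the battery at the maximizer.

```python
# Hypothetical 5-bus network: (bus_a, bus_b) -> line weight
edges = {(0, 1): 2.0, (1, 2): 1.5, (1, 3): 3.0, (3, 4): 0.5}

# weighted degree of a bus = sum of the weights of its incident lines
wdeg = {}
for (a, b), w in edges.items():
    wdeg[a] = wdeg.get(a, 0.0) + w
    wdeg[b] = wdeg.get(b, 0.0) + w

# under the paper's assumptions, place the battery at the bus of largest weighted degree
best_bus = max(wdeg, key=wdeg.get)
print(best_bus)  # bus 1, with weighted degree 2.0 + 1.5 + 3.0 = 6.5
```

This is a purely topological rule, which is why it scales so much better than re-solving the MILP for every candidate placement.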
We consider a min-max problem for strictly concave conservation laws on a 1-1 network, with inflow controls acting at the junction. We investigate the minimization problem for a functional measuring the total variation of the flow of the solutions at the node, among those solutions that maximize the time integral of the flux. To formulate this problem we establish a regularity result showing that the total variation of the boundary-flux of the solution of an initial-boundary value problem is controlled by the total variation of the initial datum and of the flux of the boundary datum. When the initial datum is monotone, we show that the flux of the entropy weak solution at the node provides an optimal inflow control for this min-max problem. We also exhibit two prototype examples showing that, in the case where the initial datum is not monotone, the flux of the entropy weak solution is no longer optimal.
Uncertainties influencing dynamical systems pose a significant challenge in estimating the achievable performance of a controller for such uncertain systems. When the uncertainties are stochastic in nature, hard guarantees for the robustness of a controller hedging against the uncertainty are not possible. This issue has motivated the development of probabilistic robust control approaches. In this work, we utilise the gap metric between the known nominal model and the unknown perturbed model of the uncertain system as a tool to gauge the robustness of a controller, and we formulate the gap as a random variable in the setting with stochastic uncertainties. The main results of this paper include a probabilistic bound on the gap exceeding a known threshold, bounds on the expected gap value, and probabilistic robust stability in terms of the gap metric. Further, we provide a probabilistic controller performance certification under gap uncertainty and a probabilistic guarantee on the achievable $\mathcal{H}_{\infty}$ robustness. Numerical simulations are provided throughout to demonstrate the proposed approach.
This paper presents a trust-based predictive multi-agent consensus protocol that analyses neighbours' anticipation data and makes coordination decisions. Agents in the network share their future predicted data over a finite look-ahead horizon with their neighbours and update their predictions in a rolling-horizon fashion. The prediction data is then used by agents to learn both the trust and the commitment traits exhibited by their neighbours over time. The proposed protocol is named the Anticipatory Distributed Coordination (ADC) protocol. A Lyapunov-theory-based proof of agreement convergence between agents is provided, followed by demonstrations using numerical simulations.
Policy iteration (PI) is a widely used algorithm for synthesizing optimal feedback control policies across many engineering and scientific applications. When PI is deployed on infinite-horizon, nonlinear, autonomous optimal-control problems, however, a number of significant theoretical challenges emerge - particularly when the computational state space is restricted to a bounded domain. In this paper, we investigate these challenges and show that the viability of PI in this setting hinges on the existence, uniqueness, and regularity of solutions to the Generalized Hamilton-Jacobi-Bellman (GHJB) equation solved at each iteration. To ensure a well-posed iterative scheme, the GHJB solution must possess sufficient smoothness, and the domain on which the GHJB equation is solved must remain forward-invariant under the closed-loop dynamics induced by the current policy. Although these aspects are fundamental to the method's convergence, previous studies have largely overlooked them. This paper closes that gap by introducing a constructive procedure that guarantees forward invariance of the computational domain throughout the entire PI sequence and by establishing sufficient conditions under which a suitably regular GHJB solution exists at every iteration. Numerical results are presented for a grid-based implementation of PI to support the theoretical findings.
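The PI template at the heart of the paper alternates exact policy evaluation with greedy policy improvement. A discrete-MDP sketch (a finite stand-in for the continuous GHJB setting; the transition and reward data are made up) is:

```python
import numpy as np

# P[a, s, s']: transition probabilities under action a; R[a, s]: stage reward
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [2.0, -1.0]])
gamma = 0.9                        # discount factor
pi = np.zeros(2, dtype=int)        # initial policy: action 0 in both states

for _ in range(20):
    # policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
    Ppi = P[pi, np.arange(2)]
    Rpi = R[pi, np.arange(2)]
    V = np.linalg.solve(np.eye(2) - gamma * Ppi, Rpi)
    # policy improvement: act greedily with respect to Q(a, s)
    Q = R + gamma * (P @ V)
    new_pi = Q.argmax(axis=0)
    if np.array_equal(new_pi, pi):  # greedy policy unchanged: converged
        break
    pi = new_pi
```

In the continuous setting the linear solve is replaced by the GHJB equation, and the paper's contribution is precisely the conditions under which that step stays well posed on a forward-invariant domain.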
This letter investigates the optimal allocation of large language model (LLM) inference workloads across heterogeneous edge data centers (DCs) over time. Each DC features on-site renewable generation and faces dynamic electricity prices and spatiotemporal variability in renewable availability. The central question is: how can inference workloads be optimally distributed to the DCs to minimize energy consumption, carbon emissions, and water usage while enhancing user experience? This letter proposes a novel optimization model for LLM service providers to reduce operational costs and environmental impacts. Numerical results validate the efficacy of the proposed approach.
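The letter's allocation problem is richer (carbon, water, renewables), but its backbone is a small linear program; a sketch with SciPy (the prices, capacities, and demands below are made-up) allocates inference workload across DCs and time slots at minimum cost:

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical data: 3 edge DCs x 2 time slots
price = np.array([[0.10, 0.12],      # effective cost per unit of inference
                  [0.08, 0.15],      # at DC d in slot t (assumed numbers)
                  [0.11, 0.09]])
cap = np.array([60.0, 50.0, 40.0])   # per-slot serving capacity of each DC
demand = np.array([80.0, 70.0])      # inference workload arriving in each slot

# decision x[d, t] >= 0, flattened row-major; all demand must be served each slot
c = price.ravel()
A_eq = np.zeros((2, 6))
for t in range(2):
    A_eq[t, t::2] = 1.0              # sum over DCs of x[:, t] == demand[t]
bounds = [(0.0, cap[d]) for d in range(3) for _ in range(2)]

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds)
print(res.fun)  # minimal total cost: 14.2 for this toy instance
```

The optimum routes each slot's load to the cheapest DCs up to capacity (DC 1 then DC 0 in slot 0; DC 2 then DC 0 in slot 1), which is exactly the spatiotemporal load shifting the letter exploits.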
We introduce a new solution concept for bounded rational agents in finite normal-form general-sum games called Generalized Quantal Response Equilibrium (GQRE) which generalizes Quantal Response Equilibrium~\citep{mckelvey1995quantal}. In our setup, each player maximizes a smooth, regularized expected utility of the mixed profiles used, reflecting bounded rationality that subsumes stochastic choice. After establishing existence under mild conditions, we present computationally efficient no-regret independent learning via smoothed versions of the Frank-Wolfe algorithm. Our algorithm uses noisy but correlated gradient estimates generated via a simulation oracle that reports on repeated plays of the game. We analyze convergence properties of our algorithm under assumptions that ensure uniqueness of equilibrium, using a class of gap functions that generalize the Nash gap. We end by demonstrating the effectiveness of our method on a set of complex general-sum games such as high-rank two-player games, large-action two-player games, and known examples of difficult multi-player games.
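Setting aside the game-theoretic machinery, the underlying Frank-Wolfe iteration on a simplex is easy to sketch. The example below (illustrative only, not the authors' smoothed, oracle-driven algorithm; the payoffs and regularization strength are assumptions) maximizes one player's entropy-regularized expected utility, whose exact maximizer is a softmax, so the iterate can be checked against the closed form:

```python
import numpy as np

r = np.array([1.0, 2.0, 0.5])      # payoffs of three pure actions (assumed)
tau = 1.0                           # entropy regularization strength (assumed)
p = np.ones(3) / 3                  # start from the uniform mixed strategy

for k in range(5000):
    # gradient of the regularized utility  p @ r - tau * sum(p * log p)
    grad = r - tau * (np.log(p) + 1.0)
    s = np.zeros(3)
    s[np.argmax(grad)] = 1.0        # linear maximization oracle: a simplex vertex
    gamma = 2.0 / (k + 2.0)         # standard open-loop Frank-Wolfe stepsize
    p = (1.0 - gamma) * p + gamma * s

target = np.exp(r / tau) / np.exp(r / tau).sum()   # exact maximizer (softmax)
```

Each step only calls a linear oracle over pure actions, which is what makes the approach attractive when gradients arrive as noisy simulation-based estimates.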
This paper considers the problem of estimating a matrix that encodes pairwise distances in a finite metric space (or, more generally, the edge weight matrix of a network) under the barycentric coding model (BCM) with respect to the Gromov-Wasserstein (GW) distance function. We frame this task as estimating the unknown barycentric coordinates with respect to the GW distance, assuming that the target matrix (or kernel) belongs to the set of GW barycenters of a finite collection of known templates. In the language of harmonic analysis, if computing GW barycenters can be viewed as a synthesis problem, this paper aims to solve the corresponding analysis problem. We propose two methods: one utilizing fixed-point iteration for computing GW barycenters, and another employing a differentiation-based approach to the GW structure using a blow-up technique. Finally, we demonstrate the application of the proposed GW analysis approach in a series of numerical experiments and applications to machine learning.
Model Predictive Control (MPC)-based Reinforcement Learning (RL) offers a structured and interpretable alternative to Deep Neural Network (DNN)-based RL methods, with lower computational complexity and greater transparency. However, standard MPC-RL approaches often suffer from slow convergence, suboptimal policy learning due to limited parameterization, and safety issues during online adaptation. To address these challenges, we propose a novel framework that integrates MPC-RL with Multi-Objective Bayesian Optimization (MOBO). The proposed MPC-RL-MOBO utilizes noisy evaluations of the RL stage cost and its gradient, estimated via a Compatible Deterministic Policy Gradient (CDPG) approach, and incorporates them into a MOBO algorithm using the Expected Hypervolume Improvement (EHVI) acquisition function. This fusion enables efficient and safe tuning of the MPC parameters to achieve improved closed-loop performance, even under model imperfections. A numerical example demonstrates the effectiveness of the proposed approach in achieving sample-efficient, stable, and high-performance learning for control systems.
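EHVI scores a candidate by the expected growth of the hypervolume dominated by the Pareto front. A minimal two-objective sketch (illustrative; the posterior mean and standard deviation below are assumed, and EHVI need not be computed by Monte Carlo in practice):

```python
import numpy as np

def nondominated(pts):
    # keep the minimization-Pareto subset of 2-D points
    out, best_f2 = [], float("inf")
    for f1, f2 in sorted(pts):
        if f2 < best_f2:
            out.append((f1, f2))
            best_f2 = f2
    return out

def hypervolume_2d(front, ref):
    # area dominated by a non-dominated 2-D front w.r.t. a reference point (minimization)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(0.0, 2.0), (1.0, 1.0)]
ref = (3.0, 3.0)
base = hypervolume_2d(front, ref)       # 5.0 for this front

# Monte-Carlo EHVI at one candidate: average hypervolume gain over posterior draws
rng = np.random.default_rng(1)
mu, sigma = np.array([0.5, 0.5]), np.array([0.3, 0.3])  # assumed GP posterior
draws = rng.normal(mu, sigma, size=(2000, 2))
gains = [hypervolume_2d(nondominated(front + [tuple(y)]), ref) - base for y in draws]
ehvi = float(np.mean(np.maximum(gains, 0.0)))
```

Maximizing this acquisition over candidate MPC parameterizations is what drives the sample-efficient multi-objective tuning described above.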
Designing satellite constellation systems involves complex multidisciplinary optimization in which coverage serves as a primary driver of overall system cost and performance. Among the various design considerations, constellation configuration -- how satellites are placed and distributed in space relative to each other -- predominantly determines the resulting coverage. In constellation configuration design, coverage can be considered either as an objective or a constraint, driven by mission objectives. State-of-the-art literature addresses each situation on a case-by-case basis, applying a unique set of assumptions, modeling, and solution methods. Although such a problem-based methodology is valuable, users often face implementation challenges when performing trade-off studies across different mission scenarios, as each scenario must be handled distinctly. In response, we propose a unifying framework consisting of five mixed-integer linear program formulations that are of practical significance, extensible to more complex mission narratives using additional constraints, and capable of obtaining provably optimal constellation configurations. It can handle various metrics and mission scenarios, such as percent coverage, average or maximum revisit times, fixed number of satellites, spatiotemporally varying coverage requirements, and ground-, aerial-, or space-based, static or mobile targets. The paper presents several add-ons, case studies, and comparative analyses to demonstrate the versatility of the proposed framework.
As both model and dataset sizes continue to scale rapidly, conventional pretraining strategies with fixed compute budgets, such as cosine learning rate schedules, are increasingly inadequate for large-scale training. Recent alternatives, including warmup-stable-decay (WSD) schedules and weight averaging, offer greater flexibility. However, WSD relies on explicit decay phases to track progress, while weight averaging addresses this limitation at the cost of additional memory. In search of a more principled and scalable alternative, we revisit the Schedule-Free (SF) method [Defazio et al., 2024], which has shown strong empirical performance across diverse settings. We show that SF-AdamW effectively navigates the "river" structure of the loss landscape without decay phases or auxiliary averaging, making it particularly suitable for continuously scaling training workloads. To understand this behavior, we conduct a theoretical and empirical analysis of SF dynamics, revealing that it implicitly performs weight averaging without memory overhead. Guided by this analysis, we propose a refined variant of SF that improves robustness to momentum and performs better under large batch sizes, addressing key limitations of the original method. Together, these results establish SF as a practical, scalable, and theoretically grounded approach for language model training.
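The Schedule-Free update is compact enough to state directly. Below is its SGD form from Defazio et al. [2024] on a toy quadratic (the Adam preconditioning of SF-AdamW is omitted, and the loss is an assumption): gradients are taken at an interpolation of a base sequence and its running average, so the averaged iterate needs no extra memory beyond one vector.

```python
import numpy as np

w_star = np.array([3.0, -1.0])           # minimizer of the toy loss (assumed)
grad = lambda w: w - w_star              # gradient of 0.5 * ||w - w_star||^2

z = np.zeros(2)                           # base (SGD) sequence
x = np.zeros(2)                           # averaged sequence: the returned weights
beta, gamma = 0.9, 0.1                    # interpolation coefficient and stepsize

for t in range(1, 5001):
    y = (1 - beta) * z + beta * x         # gradients are evaluated at y
    z = z - gamma * grad(y)               # plain SGD step on the base sequence
    x = (1 - 1.0 / t) * x + (1.0 / t) * z # running average of z: implicit weight averaging
```

Note that no decay phase or schedule appears anywhere: the averaging weight 1/t does all the annealing, which is the property the abstract highlights.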
In this paper, we focus on the problem of minimizing a continuously differentiable convex objective function $\min_x f(x)$. Recently, several adaptive gradient methods, including GRAAL (Malitsky, 2020), have been developed. These methods estimate the local curvature of the objective function to compute stepsizes, attain the standard convergence rate $\mathcal{O}(1/k)$ of fixed-stepsize gradient descent for Lipschitz-smooth functions, and do not require any line search procedures or hyperparameter tuning. However, a natural question arises: is it possible to accelerate the convergence of these algorithms to match the optimal rate $\mathcal{O}(1/k^2)$ of the accelerated gradient descent of Nesterov (1983)? Although some attempts have been made (Li and Lan, 2023), the capabilities of the existing accelerated algorithms to adapt to the curvature of the objective function are highly limited. In this work, we give a positive answer to this question and develop GRAAL with Nesterov acceleration. We prove that our algorithm achieves the desired optimal convergence rate for Lipschitz-smooth functions. Moreover, in contrast to existing methods, it does so with an arbitrary, even excessively small, initial stepsize at the cost of a logarithmic additive term in the iteration complexity.
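For intuition, here is an illustrative adaptive-stepsize gradient descent in the spirit of such methods (it follows a Malitsky-Mishchenko-style local-curvature stepsize rule, not the accelerated algorithm proposed in the paper; the quadratic objective is an assumption). It shows how the stepsize recovers automatically from an excessively small initialization:

```python
import numpy as np

A = np.array([[3.0, 0.5], [0.5, 1.0]])   # SPD, so f(x) = 0.5 x^T A x - b^T x is convex
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b

x_prev, x = np.zeros(2), np.array([1.0, 1.0])
g_prev = grad(x_prev)
lam_prev = lam = 1e-6                     # deliberately tiny initial stepsize

for _ in range(300):
    g = grad(x)
    # local curvature estimate from successive iterates (inverse stepsize scale)
    L_loc = max(np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), 1e-15),
                1e-9)
    theta = lam / lam_prev                # stepsize growth is throttled by sqrt(1 + theta)
    lam_prev, lam = lam, min(np.sqrt(1.0 + theta) * lam, 1.0 / (2.0 * L_loc))
    x_prev, g_prev = x, g
    x = x - lam * g

x_star = np.linalg.solve(A, b)            # exact minimizer for comparison
```

The stepsize grows geometrically until the curvature cap binds, so the cost of the tiny initialization is only logarithmically many warm-up steps, mirroring the logarithmic additive term in the paper's complexity bound.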