This paper explores the optimal policy for using an allocated carbon emission budget over time with the objective of maximizing profit, explicitly taking into account the present-biased, time-inconsistent preferences of decision-makers. The setup can be adapted to apply either to a (present-biased) individual or to a company seeking a balance between production and emission schedules. In particular, we use and extend stochastic control techniques developed for optimal dividend strategies in insurance risk theory. The approach enables a quantitative analysis of the effects of present-bias and of sustainability awareness, as well as of the efficiency of a potential carbon tax, in a simplified model. In numerical implementations, we illustrate how a higher degree of present-bias leads to excess emission patterns, while placing greater emphasis on sustainability reduces carbon emissions. Furthermore, we show that for low levels of carbon tax, an increase has a positive effect on curbing emissions, while beyond a certain threshold the marginal impact becomes considerably weaker.
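As a toy illustration of the present-bias mechanism (the paper's stochastic control setup is richer and not specified in the abstract), the standard quasi-hyperbolic "beta-delta" discounting scheme shows how a present-biased planner down-weights all future periods relative to today; the parameter values below are purely illustrative.

```python
import numpy as np

# Minimal sketch (not the paper's model): quasi-hyperbolic "beta-delta"
# discounting, a standard way to encode present-biased, time-inconsistent
# preferences. beta < 1 down-weights every future period relative to today.
def discount_weights(T, delta=0.95, beta=0.7):
    t = np.arange(T)
    exponential = delta ** t                              # time-consistent benchmark
    quasi_hyperbolic = np.where(t == 0, 1.0, beta * delta ** t)
    return exponential, quasi_hyperbolic

exp_w, qh_w = discount_weights(10)
# A present-biased planner values next period's payoff at only beta*delta
# of today's, so it front-loads consumption of the emission budget.
print(np.round(exp_w, 3))
print(np.round(qh_w, 3))
```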
The covariance between the return of an asset and its realized volatility can be approximated as the difference between two specific implied volatilities. In this paper it is proved that, in the small time-to-maturity limit, the approximation error tends to zero. In addition, a direct relation between the short time-to-maturity covariance and the slope of the at-the-money implied volatility is established. The limit theorems are valid for stochastic volatility models with Hurst parameter $H \in (0, 1)$. An application of the results is the accurate approximation of the Hurst parameter using only a discrete set of implied volatilities. Numerical examples under the rough Bergomi model are presented.
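In the rough volatility literature the at-the-money skew scales like $T^{H-1/2}$ for small maturities, so $H$ can be read off a log-log regression of skew against maturity. A minimal sketch on synthetic data (the scaling constant and maturities are assumptions for illustration):

```python
import numpy as np

# Hedged sketch: under rough volatility the at-the-money implied-vol skew
# scales like T**(H - 1/2) at short maturities, so H can be estimated from
# a log-log regression of |skew| on maturity. Synthetic data below.
H_true = 0.1
T = np.array([1/252, 2/252, 5/252, 10/252, 21/252])   # maturities in years
skew = 0.4 * T ** (H_true - 0.5)                       # assumed scaling law

slope, _ = np.polyfit(np.log(T), np.log(np.abs(skew)), 1)
H_est = slope + 0.5
print(f"estimated Hurst parameter: {H_est:.3f}")       # ~0.100
```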
We study the excess growth rate -- a fundamental logarithmic functional arising in portfolio theory -- from the perspective of information theory. We show that the excess growth rate can be connected to the R\'{e}nyi and cross entropies, the Helmholtz free energy, L. Campbell's measure of average code length and large deviations. Our main results consist of three axiomatic characterization theorems of the excess growth rate, in terms of (i) the relative entropy, (ii) the gap in Jensen's inequality, and (iii) the logarithmic divergence that generalizes the Bregman divergence. Furthermore, we study maximization of the excess growth rate and compare it with the growth optimal portfolio. Our results not only provide theoretical justifications of the significance of the excess growth rate, but also establish new connections between information theory and quantitative finance.
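The central quantity is standard in stochastic portfolio theory and easy to compute: for portfolio weights $\pi$ and covariance matrix $a$ of the log-returns, the excess growth rate is $\gamma^*_\pi = \tfrac{1}{2}\left(\sum_i \pi_i a_{ii} - \pi^\top a \pi\right)$. A minimal numpy sketch with illustrative inputs:

```python
import numpy as np

# Excess growth rate of a portfolio with weights pi and covariance matrix a:
#   gamma*(pi) = 0.5 * ( sum_i pi_i * a_ii  -  pi' a pi )
def excess_growth_rate(pi, a):
    return 0.5 * (pi @ np.diag(a) - pi @ a @ pi)

a = np.array([[0.04, 0.01],
              [0.01, 0.09]])        # illustrative covariance of log-returns
pi = np.array([0.5, 0.5])           # equal-weight portfolio
print(excess_growth_rate(pi, a))    # positive for non-degenerate a: 0.01375
```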
We formalize the paradox of an omniscient yet lazy investor: a perfectly informed agent who trades infrequently due to execution or computational frictions. Starting from a deterministic geometric construction, we derive a closed-form expected profit function linking trading frequency, execution cost, and path roughness. We prove existence and uniqueness of the optimal trading frequency and show that this optimum can be interpreted through the fractal dimension of the price path. A stochastic extension under fractional Brownian motion provides analytical expressions for the optimal interval and comparative statics with respect to the Hurst exponent. Empirical illustrations on equity data confirm the theoretical scaling behavior.
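The trade-off can be sketched with an assumed functional form (not the paper's exact profit function): if the per-trade edge scales like $k\,\Delta t^{H}$ under fractional Brownian motion and each trade costs $c$, then the profit rate $f(\Delta t) = (k\,\Delta t^{H} - c)/\Delta t$ is maximized at $\Delta t^* = \left(c / (k(1-H))\right)^{1/H}$.

```python
# Hedged sketch (assumed scaling, not the paper's derivation): profit per
# unit time f(dt) = (k * dt**H - c) / dt has first-order condition
#   dt* = (c / (k * (1 - H)))**(1 / H)   for H in (0, 1).
def optimal_interval(k, c, H):
    return (c / (k * (1.0 - H))) ** (1.0 / H)

k, c = 1.0, 0.01
for H in (0.3, 0.5, 0.7):           # rougher paths favor more frequent trading
    print(f"H={H}: optimal interval dt* = {optimal_interval(k, c, H):.4f}")
```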
This paper studies the time-inconsistent mean-variance (MV) optimal stopping problem via a game-theoretic approach to find equilibrium strategies. To overcome the mathematical intractability of direct equilibrium analysis, we propose a vanishing regularization method: first, we introduce an entropy-based regularization term to the MV objective, modeling mixed-strategy stopping times using the intensity of a Cox process. For this regularized problem, we derive a coupled extended Hamilton-Jacobi-Bellman (HJB) equation system, prove a verification theorem linking its solutions to equilibrium intensities, and establish the existence of classical solutions for small time horizons via a contraction mapping argument. By letting the regularization term tend to zero, we formally recover a system of parabolic variational inequalities that characterizes equilibrium stopping times for the original MV problem. This system includes an additional key quadratic term--a distinction from classical optimal stopping, where stopping conditions depend only on comparing the value function to the instantaneous reward.
This study introduces an inverse behavioral optimization framework that integrates QALY-based health outcomes, ROI-driven incentives, and adaptive behavioral learning to quantify how policy design shapes national healthcare performance. Building on the FOSSIL (Flexible Optimization via Sample-Sensitive Importance Learning) paradigm, the model embeds a regret-minimizing behavioral weighting mechanism that enables dynamic learning from heterogeneous policy environments. It recovers latent behavioral sensitivities (efficiency, fairness, and temporal responsiveness $T$) from observed QALY-ROI trade-offs, providing an analytical bridge between individual incentive responses and aggregate system productivity. We formalize this mapping through the proposed System Impact Index (SII), which links behavioral elasticity to measurable macro-level efficiency and equity outcomes. Using OECD-WHO panel data, the framework empirically demonstrates that modern health systems operate near an efficiency-saturated frontier, where incremental fairness adjustments yield stabilizing but diminishing returns. Simulation and sensitivity analyses further show how small changes in behavioral parameters propagate into measurable shifts in systemic resilience, equity, and ROI efficiency. The results establish a quantitative foundation for designing adaptive, data-driven health incentive programs that dynamically balance efficiency, fairness, and long-run sustainability in national healthcare systems.
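The FOSSIL-specific update is not given in the abstract; the following generic multiplicative-weights (Hedge) sketch only illustrates the regret-minimization principle such a behavioral weighting mechanism builds on, with made-up loss data.

```python
import numpy as np

# Generic regret-minimizing weighting sketch (Hedge / multiplicative
# weights), NOT the FOSSIL update itself: weights shift toward the
# policy design that accumulates the least loss.
def hedge_update(weights, losses, eta=0.1):
    w = weights * np.exp(-eta * losses)   # penalize poorly performing arms
    return w / w.sum()

rng = np.random.default_rng(0)
w = np.ones(3) / 3                        # three candidate policy designs
for _ in range(100):
    losses = rng.uniform(0, 1, 3) * np.array([1.0, 0.8, 1.2])
    w = hedge_update(w, losses)
print(np.round(w, 3))                     # mass concentrates on the low-loss design
```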
We study a goal-based portfolio selection problem in which an investor aims to meet multiple financial goals, each with a specific deadline and target amount. Trading the stock incurs a strictly positive transaction cost. Using the stochastic Perron's method, we show that the value function is the unique viscosity solution to a system of quasi-variational inequalities. The existence of an optimal trading strategy and goal funding scheme is established. Numerical results reveal complex optimal trading regions and show that the optimal investment strategy differs substantially from the V-shaped strategy observed in the frictionless case.
This paper presents an option pricing model that incorporates clustered jumps using a bivariate Hawkes process. The process captures both self- and cross-excitation of positive and negative jumps, enabling the model to generate return dynamics with asymmetric, time-varying skewness and to produce positive or negative implied volatility skews. This feature is especially relevant for assets such as cryptocurrencies, so-called ``meme'' stocks, G-7 currencies, and certain commodities, where implied volatility skews may change sign depending on prevailing sentiment. We introduce two additional parameters, namely the positive and negative jump premia, to model the market risk preferences for positive and negative jumps, inferred from options data. This enables the model to flexibly match observed skew dynamics. Using Bitcoin (BTC) options, we empirically demonstrate how inferred jump risk premia exhibit predictive power for both the cost of carry in BTC futures and the performance of delta-hedged option strategies.
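A bivariate Hawkes process with exponential kernels can be simulated by Ogata's thinning algorithm; the sketch below (illustrative parameters, dimension 0 for positive jumps and 1 for negative jumps) shows the self- and cross-excitation structure the model relies on.

```python
import numpy as np

# Ogata thinning for a bivariate Hawkes process with exponential kernels:
# lambda_i(t) = mu[i] + sum_j sum_{t_k in dim j, t_k < t}
#                       alpha[i, j] * exp(-beta * (t - t_k))
def simulate_hawkes(mu, alpha, beta, T, seed=0):
    rng = np.random.default_rng(seed)
    events = [[], []]                          # event times per dimension

    def intensity(t):
        lam = mu.copy()
        for j in (0, 1):
            for tk in events[j]:
                lam += alpha[:, j] * np.exp(-beta * (t - tk))
        return lam

    t = 0.0
    while t < T:
        lam_bar = intensity(t).sum()           # valid bound: kernels decay
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam = intensity(t)
        if rng.uniform() < lam.sum() / lam_bar:        # accept the candidate
            dim = rng.choice(2, p=lam / lam.sum())     # attribute the jump
            events[dim].append(t)
    return events

mu = np.array([0.5, 0.5])
alpha = np.array([[0.3, 0.1],                  # self- and cross-excitation
                  [0.1, 0.3]])
pos, neg = simulate_hawkes(mu, alpha, beta=1.5, T=100.0)
print(len(pos), len(neg))
```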
In this paper, we investigate a portfolio selection problem with transaction costs under a two-factor stochastic volatility structure, where volatility follows a mean-reverting process with a stochastic mean-reversion level. The model incorporates both proportional exogenous transaction costs and endogenous costs modeled by a stochastic liquidity risk process. Using an option-implied approach, we extract an S-shaped utility function that reflects investor behavior and apply its concave envelope transformation to handle the non-concavity. The resulting problem reduces to solving a five-dimensional nonlinear Hamilton-Jacobi-Bellman equation. We employ a deep learning-based policy iteration scheme to numerically compute the value function and the optimal policy. Numerical experiments are conducted to analyze how both types of transaction costs and stochastic volatility affect optimal investment decisions.
We study a consumption-investment problem in a multi-asset market where the returns follow a generic rank-based model. Our main result derives an HJB equation with Neumann boundary conditions for the value function and proves a corresponding verification theorem. The control problem is nonstandard due to the discontinuous nature of the coefficients in rank-based models, requiring a bespoke approach of independent mathematical interest. The special case of first-order models, prescribing constant drift and diffusion coefficients for the ranked returns, admits explicit solutions when the investor either (a) is unconstrained, (b) abides by open market constraints, or (c) is fully invested in the market. The explicit optimal strategies in all cases are related to the celebrated solution to Merton's problem, despite the intractability of constraint (b) in that setting.
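For reference, the classical Merton solution these strategies relate to is the constant fraction $\pi^* = (\mu - r)/(\gamma \sigma^2)$ for CRRA risk aversion $\gamma$, excess drift $\mu - r$ and volatility $\sigma$; the numbers below are illustrative.

```python
# Classical Merton fraction: optimal constant share of wealth in the
# risky asset under CRRA utility with risk aversion gamma.
def merton_fraction(mu, r, sigma, gamma):
    return (mu - r) / (gamma * sigma ** 2)

print(merton_fraction(mu=0.08, r=0.02, sigma=0.2, gamma=2.0))  # 0.75
```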
This paper develops a novel framework for modeling the variance swap of multi-asset portfolios by employing the generalized variance approach, which utilizes the determinant of the covariance matrix of the underlying assets. By specifying the distribution of the log returns of the underlying assets under the Heston and Barndorff-Nielsen and Shephard (BNS) stochastic volatility frameworks, we derive closed-form solutions for the realized variance through the computation of the generalized covariance of the multi-asset portfolio. To evaluate the robustness of the proposed model, we conduct simulations using nine different assets generated via the quantmod package. For a three-asset portfolio, analytical expressions for the multivariate variance swap are obtained under both the Heston and BNS models. Numerical experiments further demonstrate the effectiveness of the proposed model through parameter testing, calibration, and validation.
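The generalized-variance building block itself is simple to compute: it is the determinant of the covariance matrix of the assets' log-returns. A sketch on simulated data (the paper instead derives closed forms under Heston and BNS dynamics):

```python
import numpy as np

# Generalized variance = det of the covariance matrix of log-returns.
# Here estimated from simulated Gaussian returns with an assumed covariance.
rng = np.random.default_rng(1)
cov_true = np.array([[0.04,  0.012, 0.008],
                     [0.012, 0.09,  0.015],
                     [0.008, 0.015, 0.0625]])
log_returns = rng.multivariate_normal(np.zeros(3), cov_true, size=5000)

cov_hat = np.cov(log_returns, rowvar=False)
print(np.linalg.det(cov_hat))      # sample generalized variance
print(np.linalg.det(cov_true))     # population counterpart
```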
Traditional mean-field game (MFG) solvers operate on an instance-by-instance basis, which becomes infeasible when many related problems must be solved (e.g., for seeking a robust description of the solution under perturbations of the dynamics or utilities, or in settings involving continuum-parameterized agents). We overcome this by training neural operators (NOs) to learn the rules-to-equilibrium map from the problem data (``rules'': dynamics and cost functionals) of LQ MFGs defined on separable Hilbert spaces to the corresponding equilibrium strategy. Our main result is a statistical guarantee: an NO trained on a small number of randomly sampled rules reliably solves unseen LQ MFG variants, even in infinite-dimensional settings. The number of NO parameters needed remains controlled under appropriate rule sampling during training. Our guarantee follows from three results: (i) local-Lipschitz estimates for the highly nonlinear rules-to-equilibrium map; (ii) a universal approximation theorem using NOs with a prespecified Lipschitz regularity (unlike traditional NO results where the NO's Lipschitz constant can diverge as the approximation error vanishes); and (iii) new sample-complexity bounds for $L$-Lipschitz learners in infinite dimensions, directly applicable as the Lipschitz constants of our approximating NOs are controlled in (ii).
This paper introduces a semi-analytical method for pricing American options on assets (stocks, ETFs) that pay discrete and/or continuous dividends. The problem is notoriously complex because discrete dividends create abrupt price drops and affect the optimal exercise timing, making traditional continuous-dividend models unsuitable. Our approach utilizes the Generalized Integral Transform (GIT) method introduced by the author and his co-authors in a number of papers, which transforms the pricing problem from a complex partial differential equation with a free boundary into a Volterra integral equation of the second or first kind. In this paper we illustrate this approach by considering the popular geometric Brownian motion (GBM) model that accounts for discrete cash and proportional dividends using Dirac delta functions. By reframing the problem as an integral equation, we can sequentially solve for the option price and the early exercise boundary, effectively handling the discontinuities caused by the dividends. Our methodology provides a powerful alternative to standard numerical techniques like binomial trees or finite difference methods, which can struggle with the jump conditions of discrete dividends by losing accuracy or performance. Several examples demonstrate that the GIT method is highly accurate and computationally efficient, bypassing the need for extensive computational grids or complex backward induction steps.
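A generic sketch of the kind of equation the GIT method produces, a Volterra equation of the second kind $f(t) = g(t) + \int_0^t K(t,s)\,f(s)\,ds$, solved by marching forward with the trapezoidal rule. The kernel below is a placeholder, not the paper's early-exercise kernel; with $K \equiv 1$, $g \equiv 1$ the exact solution is $f(t) = e^t$.

```python
import numpy as np

# March a second-kind Volterra equation forward on a uniform grid with the
# trapezoidal rule; each step solves a scalar linear equation for f[n].
def solve_volterra(g, K, t):
    h = t[1] - t[0]
    f = np.empty_like(t)
    f[0] = g(t[0])
    for n in range(1, len(t)):
        acc = 0.5 * K(t[n], t[0]) * f[0]
        acc += sum(K(t[n], t[m]) * f[m] for m in range(1, n))
        f[n] = (g(t[n]) + h * acc) / (1.0 - 0.5 * h * K(t[n], t[n]))
    return f

t = np.linspace(0.0, 1.0, 101)
f = solve_volterra(lambda s: 1.0, lambda ti, si: 1.0, t)
print(abs(f[-1] - np.e))           # small O(h^2) discretization error
```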
Parallel physics-informed neural networks (P-PINNs) have been widely used to solve systems with multiple coupled physical fields, such as the coupled Stokes-Darcy equations with Beavers-Joseph-Saffman (BJS) interface conditions. However, excessively high or low physical constants in partial differential equations (PDEs) often lead to ill-conditioned loss functions and can even cause the failure of training numerical solutions for PINNs. To solve this problem, we develop a new kind of enhanced parallel PINNs, MF-PINNs, in this article. Our MF-PINNs combine the velocity-pressure (VP) form with the stream-vorticity (SV) form, adding both to the total loss function with adjusted weights. The results of numerical experiments show that our MF-PINNs successfully improve the accuracy of the streamline and pressure fields when the kinematic viscosity and permeability tensor range from $10^{-4}$ to $10^{4}$. Thus, our MF-PINNs hold promise for more chaotic PDE systems involving turbulent flows. Additionally, we explore the best combination of activation functions and their periodicity, as well as the choice of the initial learning rate and its decay strategy. The code and data associated with this paper are available at https://github.com/shxshx48716/MF-PINNs.git.
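The weighting idea can be sketched generically: residuals from two formulations of the same system are combined with tunable weights rather than used in isolation. The residuals below are stand-ins, not the Stokes-Darcy system, and the weights are assumptions.

```python
import torch

# Hedged sketch of the MF-PINN loss combination: weighted sum of residuals
# from two formulations (VP and SV). The residuals here are placeholders.
def total_loss(model, x, w_vp=1.0, w_sv=1.0):
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    res_vp = (du[:, 0] - u[:, 0]).pow(2).mean()   # stand-in VP residual
    res_sv = (du[:, 1] + u[:, 1]).pow(2).mean()   # stand-in SV residual
    return w_vp * res_vp + w_sv * res_sv

model = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 2))
x = torch.rand(256, 2)
loss = total_loss(model, x)
loss.backward()
print(loss.item())
```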
Deep hedging uses recurrent neural networks to hedge financial products that cannot be fully hedged in incomplete markets. Previous work in this area focuses on minimizing some measure of quadratic hedging error by calculating pathwise gradients, but doing so requires large batch sizes and can make training effective models in a reasonable amount of time challenging. We show that by adding certain topological features, we can reduce batch sizes substantially and make training these models more practically feasible without greatly compromising hedging performance.
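A minimal baseline sketch of the setup the paper augments (no topological features here): a recurrent network maps the price path to hedge ratios and is trained by pathwise gradients to minimize a quadratic hedging error for a short call on simulated GBM paths. The option premium is crudely proxied by the mean payoff; all parameters are illustrative.

```python
import torch

torch.manual_seed(0)
n_paths, n_steps, s0, K = 4096, 30, 100.0, 100.0
dt, sigma = 1.0 / 252, 0.2

# Simulate risk-neutral GBM paths (zero rate).
z = torch.randn(n_paths, n_steps)
log_s = torch.cumsum(-0.5 * sigma**2 * dt + sigma * dt**0.5 * z, dim=1)
S = s0 * torch.exp(torch.cat([torch.zeros(n_paths, 1), log_s], dim=1))

rnn = torch.nn.GRU(1, 16, batch_first=True)
head = torch.nn.Linear(16, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), 1e-2)

for _ in range(200):
    h, _ = rnn(S[:, :-1].unsqueeze(-1) / s0 - 1.0)     # normalized prices in
    delta = head(h).squeeze(-1)                        # hedge ratio per step
    pnl = (delta * (S[:, 1:] - S[:, :-1])).sum(1)      # gains from hedging
    payoff = torch.relu(S[:, -1] - K)
    loss = ((payoff - pnl - payoff.mean()) ** 2).mean()  # quadratic error
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```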
This paper provides necessary and sufficient conditions for a pair of randomised stopping times to form a saddle point of a zero-sum Dynkin game with partial and/or asymmetric information across players. The framework is non-Markovian and covers essentially any information structure. Our methodology relies on the identification of suitable super- and submartingales involving players' equilibrium payoffs. Saddle point strategies are characterised in terms of the dynamics of those equilibrium payoffs and are related to their Doob-Meyer decompositions.
In this paper we study the short-maturity asymptotics of up-and-in barrier options under a broad class of stochastic volatility models. Our approach uses Malliavin calculus techniques, typically used for linear stochastic partial differential equations, to analyse the law of the supremum of the log-price process. We derive a concentration inequality and explicit bounds on the density of the supremum in terms of the time to maturity. These results yield an upper bound on the asymptotic decay rate of up-and-in barrier option prices as maturity vanishes. We further demonstrate the applicability of our framework to the rough Bergomi model and validate the theoretical results with numerical experiments.
In the context of time-subordinated Brownian motion models, Fourier theory and methodology are proposed for modelling the stochastic distribution of time increments. Gaussian variance-mean mixtures and time-subordinated models are reviewed, with a key example being the Variance-Gamma process. A non-parametric characteristic function decomposition of subordinated Brownian motion is presented. The theory requires an extension of the real domain of certain characteristic functions to the complex plane, the validity of which is proven here. This allows one to characterise and study the stochastic time-change directly from the full process. An empirical decomposition of S\&P log-returns is provided to illustrate the methodology.
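The Variance-Gamma example can be checked numerically: Brownian motion with drift $\theta$ and volatility $\sigma$, subordinated by a gamma time change with variance rate $\nu$, has unit-time characteristic function $\varphi(u) = (1 - iu\theta\nu + \tfrac{1}{2}\sigma^2\nu u^2)^{-1/\nu}$. A sketch comparing it with the empirical characteristic function of simulated returns (parameters illustrative):

```python
import numpy as np

# Variance-Gamma as subordinated Brownian motion: X = theta*G + sigma*sqrt(G)*Z
# with G ~ Gamma(shape=1/nu, scale=nu). Compare empirical and exact
# characteristic functions on a grid of frequencies u.
rng = np.random.default_rng(0)
theta, sigma, nu = -0.1, 0.2, 0.5

g = rng.gamma(shape=1.0 / nu, scale=nu, size=100_000)   # gamma time change
x = theta * g + sigma * np.sqrt(g) * rng.standard_normal(g.size)

u = np.linspace(-10, 10, 41)
phi_emp = np.exp(1j * np.outer(u, x)).mean(axis=1)
phi_vg = (1 - 1j * u * theta * nu + 0.5 * sigma**2 * nu * u**2) ** (-1 / nu)
print(np.abs(phi_emp - phi_vg).max())                   # small sampling error
```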