The arbitrage gains, or equivalently the Loss Versus Rebalancing (LVR), for arbitrage between two imperfectly liquid markets are derived. To derive the LVR, I assume a quadratic trading cost to model the cost of trading on the more liquid exchange and discuss the situations to which my model arguably applies well (long-tail CEX-DEX arbitrage, DEX-DEX arbitrage) and those to which it applies less well (CEX-DEX arbitrage for major pairs). I discuss extensions to other cost functions and directions for future research.
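The mechanics of the quadratic-cost setting fit in a few lines. A minimal sketch, assuming made-up dynamics and parameters (an arithmetic Brownian price gap, cost $c(q) = \eta q^2/2$, and a myopic arbitrageur), not the paper's actual derivation:

    import numpy as np

    # With cost c(q) = eta*q^2/2, a price gap "delta" is optimally arbitraged by
    # q* = delta/eta, capturing delta*q* - c(q*) = delta^2/(2*eta).
    rng = np.random.default_rng(0)
    eta, sigma = 0.5, 0.02        # cost coefficient and gap volatility (assumed)
    n, dt = 10_000, 1.0 / 10_000

    gap = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))
    lvr_proxy = np.mean(gap ** 2 / (2 * eta))
    print(f"average arbitrage gain (LVR proxy): {lvr_proxy:.2e}")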
Quantilized mean-field game models involve quantiles of the population's distribution. We study a class of such games with a capacity for ranking games, where the performance of each agent is evaluated based on its terminal state relative to the population's $\alpha$-quantile value, $\alpha \in (0,1)$. This evaluation criterion is designed to select the top $100(1-\alpha)\%$ of performing agents. We provide two formulations for this competition: a target-based formulation and a threshold-based formulation. To satisfy the selection condition, each agent aims for its terminal state to be \textit{exactly} equal to the population's $\alpha$-quantile value in the former formulation, and \textit{at least} equal to it in the latter. For the target-based formulation, we obtain an analytic solution and demonstrate the $\epsilon$-Nash property for the asymptotic best-response strategies in the $N$-player game. Specifically, the quantilized mean-field consistency condition is expressed as a set of forward-backward ordinary differential equations characterizing the $\alpha$-quantile value at equilibrium. For the threshold-based formulation, we obtain a semi-explicit solution and numerically solve the resulting quantilized mean-field consistency condition. Subsequently, we propose a new application in the context of early-stage venture investments, where a venture capital firm financially supports a group of start-up companies competing over a finite time horizon, with the goal of selecting a percentage of top-ranking ones to receive the next round of funding at the end of the horizon. We present the results and interpretations of numerical experiments for both formulations in this context and show that the target-based formulation provides a very good approximation to the threshold-based formulation.
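The consistency condition can be conveyed with a toy fixed-point iteration: agents steer toward a posted target q, and at equilibrium the $\alpha$-quantile of the induced terminal states must reproduce q. The linear tracking dynamics and all parameters below are made up; the paper characterizes the equilibrium via forward-backward ODEs instead:

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, n_agents, noise = 0.8, 100_000, 0.5
    x0 = rng.standard_normal(n_agents)          # initial states

    q = 0.0
    for _ in range(50):
        # terminal state under a toy tracking control: partway to target, plus noise
        xT = 0.5 * x0 + 0.5 * q + noise * rng.standard_normal(n_agents)
        q = np.quantile(xT, alpha)              # update the posted quantile target
    print("equilibrium alpha-quantile:", q)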
We assess the applicability of rough volatility models to Bitcoin realised volatility using the normalised p-variation framework of Cont and Das (2024). Applying this model-free estimator to high-frequency Bitcoin data from 2017 to 2024 across multiple sampling resolutions, we find that the normalised statistic remains strictly negative throughout, precluding the estimation of a valid roughness index. Stationarity tests and robustness checks reveal no significant evidence of non-stationarity or structural breaks as explanatory factors. Instead, convergent evidence from three complementary diagnostics, namely multifractal detrended fluctuation analysis, log-log moment scaling, and wavelet leaders, reveals a multifractal structure in Bitcoin volatility. This scale-dependent behaviour violates the homogeneity assumptions underlying rough volatility estimation and accounts for the estimator's systematic failure. These findings suggest that while rough volatility models perform well in traditional markets, they are structurally misaligned with the empirical features of Bitcoin volatility.
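Of the three diagnostics, log-log moment scaling is the easiest to sketch: a monofractal process has a scaling exponent $\zeta(q)$ linear in $q$, and curvature in $q$ is the multifractal signature the abstract refers to. The random-walk input below is a stand-in for log realised volatility, not Bitcoin data:

    import numpy as np

    def structure_function(x, q, lags):
        # S_q(l) = mean |x_{t+l} - x_t|^q, with scaling S_q(l) ~ l^{zeta(q)}
        return np.array([np.mean(np.abs(x[l:] - x[:-l]) ** q) for l in lags])

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.standard_normal(2 ** 15))   # placeholder series
    lags = np.array([1, 2, 4, 8, 16, 32, 64])
    for q in (0.5, 1.0, 2.0, 3.0):
        zeta = np.polyfit(np.log(lags), np.log(structure_function(x, q, lags)), 1)[0]
        print(f"q={q}: zeta(q) = {zeta:.3f}")     # linear in q here (Brownian case)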
We derive an explicit analytical approximation for the local volatility function in the Cheyette interest rate model, extending the classical Dupire framework to fixed-income markets. The result expresses local volatility in terms of time and strike derivatives of the Bachelier implied variance, naturally generalizes to multi-factor Cheyette models, and provides a practical tool for model calibration.
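For orientation, recall the classical Dupire identity under zero rates, which the paper's result recasts for the Cheyette model in terms of Bachelier implied variance: with call prices $C(T,K)$,
\[
\sigma_{\mathrm{loc}}^{2}(T,K) = \frac{\partial_T C(T,K)}{\tfrac{1}{2}\,K^{2}\,\partial_{KK} C(T,K)} \quad \text{(lognormal dynamics)}, \qquad \sigma_{N,\mathrm{loc}}^{2}(T,K) = \frac{\partial_T C(T,K)}{\tfrac{1}{2}\,\partial_{KK} C(T,K)} \quad \text{(Bachelier dynamics)},
\]
the latter following from the forward equation $\partial_T C = \tfrac{1}{2}\sigma_{N,\mathrm{loc}}^{2}(T,K)\,\partial_{KK} C$ for a driftless forward.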
We introduce a new microeconomic model of horizontal differentiation that unifies and extends previous developments inspired by the seminal work of Hotelling (1929). Our framework incorporates boundedly rational consumers, an unlimited number of firms, and arbitrary differentiation spaces modeled as Riemannian manifolds. We argue that Riemannian geometry provides a natural and powerful tool for analyzing such models, offering fresh insights into firm behavior and market structure for complex products.
Passive investing has gained immense popularity due to its low fees and the perceived simplicity of focusing on zero tracking error rather than security selection. However, our analysis shows that the passive (zero tracking error) approach of waiting until the market close on the day of index reconstitution to purchase a stock (announced days earlier as an upcoming addition) costs hundreds of basis points compared to strategies that gradually acquire a small portion of the required shares in advance with minimal additional tracking error. In addition, we show that under all scenarios analyzed, a trader who builds a small inventory post-announcement and provides liquidity at the reconstitution event can consistently earn several hundred basis points in profit, and often much more, while assuming minimal risk.
We introduce a novel neural-network-based approach to learning the generating function $G(\cdot)$ of a functionally generated portfolio (FGP) from synthetic or real market data. In the neural network setting, the generating function is represented as $G_{\theta}(\cdot)$, where $\theta$ is the trainable network parameter vector, and $G_{\theta}(\cdot)$ is trained to maximise investment return relative to the market portfolio. We compare the performance of the neural FGP approach against classical FGP benchmarks. FGPs provide a robust alternative to classical portfolio optimisation by bypassing the need to estimate drifts or covariances. The neural FGP framework extends this by introducing flexibility in the design of the generating function, enabling it to learn from market dynamics while preserving self-financing and pathwise decomposition properties.
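As a sketch of the moving parts (the architecture and sizes below are placeholders, not the paper's network), the weights generated by a candidate $G_\theta$ can be computed by automatic differentiation via Fernholz's formula $\pi_i = \mu_i\,(\partial_i \log G + 1 - \sum_j \mu_j \partial_j \log G)$, which guarantees the weights sum to one:

    import torch

    class GeneratingFunction(torch.nn.Module):
        # Placeholder positive G_theta(mu)
        def __init__(self, d, hidden=32):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(d, hidden), torch.nn.Tanh(),
                torch.nn.Linear(hidden, 1), torch.nn.Softplus())

        def forward(self, mu):
            return self.net(mu).squeeze(-1)

    def fgp_weights(G, mu):
        # pi_i = mu_i * (d_i log G + 1 - sum_j mu_j d_j log G)
        mu = mu.clone().requires_grad_(True)
        grad = torch.autograd.grad(torch.log(G(mu)).sum(), mu, create_graph=True)[0]
        return mu * (grad + 1.0 - (mu * grad).sum(-1, keepdim=True))

    G = GeneratingFunction(d=5)
    mu = torch.softmax(torch.randn(8, 5), dim=-1)   # batch of market weights
    print(fgp_weights(G, mu).sum(-1))               # each row sums to 1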
In this paper we study the pricing and hedging of nonreplicable contingent claims, such as long-term insurance contracts like variable annuities. Our approach is based on the benchmark-neutral pricing framework of Platen (2024), which differs from the classical benchmark approach by using the stock growth optimal portfolio as the numéraire. In typical settings, this choice leads to an equivalent martingale measure, the benchmark-neutral measure. The resulting prices can be significantly lower than the respective risk-neutral ones, making this approach attractive for long-term risk management. We derive the associated risk-minimizing hedging strategy under the assumption that the contingent claim possesses a martingale decomposition. For a set of nonreplicable contingent claims, these strategies allow monitoring the working capital required to generate their payoffs and enable an assessment of the resulting diversification effects. Furthermore, an algorithmic refinancing strategy is proposed that allows modeling the working capital. Finally, insurance-finance arbitrages of the first kind are introduced, and it is demonstrated that benchmark-neutral pricing effectively avoids such arbitrages.
Despite significant advancements in machine learning for derivative pricing, the efficient and accurate valuation of American options remains a persistent challenge due to complex exercise boundaries, near-expiry behavior, and intricate contractual features. This paper extends a semi-analytical approach for pricing American options in time-inhomogeneous models, including pure diffusions, jump-diffusions, and Lévy processes. Building on prior work, we derive and solve Volterra integral equations of the second kind to determine the exercise boundary explicitly, offering a computationally superior alternative to traditional finite-difference and Monte Carlo methods. We address key open problems: (1) extending the decomposition method, i.e., splitting the American option price into its European counterpart and an early exercise premium, to general jump-diffusion and Lévy models; (2) handling cases where closed-form transition densities are unavailable by leveraging characteristic functions via, e.g., the COS method; and (3) generalizing the framework to multidimensional diffusions. Numerical examples demonstrate the method's efficiency and robustness. Our results underscore the advantages of the integral equation approach for large-scale industrial applications, while resolving some limitations of existing techniques.
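To make the equation class concrete: a generic second-kind Volterra equation $f(t) = g(t) + \int_0^t K(t,s) f(s)\,ds$ can be marched forward on a grid. The trapezoidal solver and toy kernel below are illustrative; the paper's kernels come from the model's transition density or characteristic function:

    import numpy as np

    def solve_volterra2(K, g, t):
        # Solve f(t) = g(t) + int_0^t K(t,s) f(s) ds on a uniform grid (trapezoid rule)
        n, h = len(t), t[1] - t[0]
        f = np.empty(n)
        f[0] = g(t[0])
        for i in range(1, n):
            w = np.full(i + 1, h)
            w[0] = w[-1] = h / 2.0                        # trapezoidal weights
            known = g(t[i]) + np.dot(w[:-1], K(t[i], t[:i]) * f[:i])
            f[i] = known / (1.0 - w[-1] * K(t[i], t[i]))  # endpoint term is implicit
        return f

    # Toy check: f(t) = 1 + int_0^t f(s) ds has exact solution f(t) = exp(t)
    t = np.linspace(0.0, 1.0, 201)
    f = solve_volterra2(lambda ti, s: np.ones_like(s, dtype=float), lambda ti: 1.0, t)
    print(abs(f[-1] - np.e))                              # small discretisation error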
The paper summarizes key results of the benchmark approach with a focus on the concept of benchmark-neutral pricing. It applies these results to the pricing of an extreme-maturity European put option on a well-diversified stock index. The growth optimal portfolio of the stocks is approximated by a well-diversified stock portfolio and modeled by a drifted time-transformed squared Bessel process of dimension four. It is shown that the benchmark-neutral price of a European put option is theoretically the minimal possible price and the respective risk-neutral put price turns out to be significantly more expensive.
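The driving object is concrete enough to simulate. A minimal Euler sketch of a plain squared Bessel process of dimension four, $dX_t = 4\,dt + 2\sqrt{X_t}\,dW_t$; the paper's drifted time transformation is omitted, and all numbers below are made up:

    import numpy as np

    rng = np.random.default_rng(0)
    T, n, n_paths = 10.0, 2_000, 5
    dt = T / n
    x = np.full(n_paths, 1.0)
    for _ in range(n):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = np.abs(x + 4.0 * dt + 2.0 * np.sqrt(x) * dw)   # reflect to keep X >= 0
    print(x)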
The cryptocurrency options market is notable for its high volatility and lower liquidity compared to traditional markets. These characteristics introduce significant challenges to traditional option pricing methodologies. Addressing these complexities requires advanced models that can effectively capture the dynamics of the market. We explore which option pricing models are most effective in valuing cryptocurrency options. Specifically, we calibrate and evaluate the performance of the Black-Scholes, Merton Jump Diffusion, Variance Gamma, Kou, Heston, and Bates models. Our analysis focuses on pricing vanilla options on futures contracts for Bitcoin (BTC) and Ether (ETH). We find that the Black-Scholes model exhibits the highest pricing errors. In contrast, the Kou and Bates models achieve the lowest errors, with the Kou model performing the best for the BTC options and the Bates model for ETH options. The results highlight the importance of incorporating jumps and stochastic volatility into pricing models to better reflect the behavior of these assets.
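The calibration template is the same across all six models: minimise pricing errors over the parameter vector. A least-squares sketch for the simplest case, Black-76 on futures, with synthetic quotes standing in for market prices and made-up strikes and volatility; the same template generalises to the multi-parameter Kou and Bates models:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.stats import norm

    def black76_call(F, K, T, sigma, r=0.0):
        d1 = (np.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return np.exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))

    rng = np.random.default_rng(0)
    F, T = 60_000.0, 30 / 365
    K = np.array([50_000.0, 55_000.0, 60_000.0, 65_000.0, 70_000.0])
    market = black76_call(F, K, T, 0.65) + rng.normal(0.0, 5.0, K.size)

    fit = least_squares(lambda p: black76_call(F, K, T, p[0]) - market,
                        x0=[0.5], bounds=(1e-4, 5.0))
    print("calibrated sigma:", fit.x[0])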
We propose a quantum machine learning framework for approximating solutions to high-dimensional parabolic partial differential equations (PDEs) that can be reformulated as backward stochastic differential equations (BSDEs). In contrast to popular quantum-classical hybrid network approaches, this study employs a pure Variational Quantum Circuit (VQC) as the core solver, without trainable classical neural networks. The quantum BSDE solver performs pathwise approximation via temporal discretization and Monte Carlo simulation, framed as model-based reinforcement learning. We benchmark VQC-based and classical deep neural network (DNN) solvers on two canonical PDEs as representatives: the Black-Scholes and nonlinear Hamilton-Jacobi-Bellman (HJB) equations. The VQC achieves lower variance and improved accuracy in most cases, particularly in highly nonlinear regimes and for out-of-the-money options, demonstrating greater robustness than DNNs. These results, obtained via quantum circuit simulation, highlight the potential of VQCs as scalable and stable solvers for high-dimensional stochastic control problems.
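The pathwise scheme itself is model-agnostic, so a compact classical stand-in helps fix ideas: discretise the BSDE, parametrise $Z_t$ (here by a small DNN where the paper uses a VQC), and train $Y_0$ so the terminal value matches the payoff. Everything below (dynamics, sizes, rates) is an illustrative assumption:

    import torch

    # BSDE for the zero-rate Black-Scholes PDE: dX = sigma*X dW, dY = Z dW, Y_T = g(X_T)
    sigma, T, n_steps, batch = 0.2, 1.0, 50, 512
    dt = T / n_steps
    g = lambda x: torch.clamp(x - 1.0, min=0.0)     # call payoff, strike 1

    y0 = torch.nn.Parameter(torch.tensor(0.1))      # learned initial value = price
    znet = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))
    opt = torch.optim.Adam([y0, *znet.parameters()], lr=1e-2)

    for it in range(500):
        x = torch.ones(batch, 1)                    # X_0 = 1
        y = y0 * torch.ones(batch, 1)
        for k in range(n_steps):
            t = torch.full((batch, 1), k * dt)
            dw = dt ** 0.5 * torch.randn(batch, 1)
            y = y + znet(torch.cat([t, x], dim=1)) * dw   # driver f = 0 here
            x = x + sigma * x * dw
        loss = ((y - g(x)) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    print(float(y0))   # should approach the ATM Black-Scholes call value (~0.0797)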
We construct an aggregator for a family of Snell envelopes in a nondominated framework. We apply this construction to establish a robust hedging duality, along with the existence of a minimal hedging strategy, in a general semi-martingale setting for American-style options. Our results encompass continuous processes, or processes with jumps and non-vanishing diffusion. A key application is to financial market models, where uncertainty is quantified through the semi-martingale characteristics.
We present a Markovian market model driven by a hidden Brownian efficient price. In particular, we extend the queue-reactive model, making its dynamics dependent on the efficient price. Our study focuses on two sub-models: a signal-driven price model where the mid-price jump rates depend on the efficient price and an observable signal, and the usual queue-reactive model dependent on the efficient price via the intensities of the order arrivals. This way, we are able to correlate the evolution of limit order books of different stocks. We prove the stability of the observed mid-price around the efficient price under natural assumptions. Precisely, we show that at the macroscopic scale, prices behave as diffusions. We also develop a maximum likelihood estimation procedure for the model and test it numerically. Our model is then used to backtest trading strategies in a liquidation context.
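For intuition about the queue-reactive mechanics, a single queue with state-dependent arrival and cancellation intensities can be simulated with exponential clocks. The functional forms and rates below are toys, not the estimated model, and the efficient-price dependence is omitted:

    import numpy as np

    rng = np.random.default_rng(0)
    lam_add = lambda q: 1.0 / (1.0 + q)   # limit orders slow down as the queue grows
    lam_cancel = lambda q: 0.2 * q        # cancellations scale with queue size
    q, t, T = 5, 0.0, 100.0
    while t < T:
        rates = np.array([lam_add(q), lam_cancel(q)])
        t += rng.exponential(1.0 / rates.sum())            # time to next event
        q += 1 if rng.random() < rates[0] / rates.sum() else -1
        q = max(q, 0)
    print("final queue size:", q)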
We study an optimal execution strategy for purchasing a large block of shares over a fixed time horizon. The execution problem is subject to a general price impact that gradually dissipates due to market resilience. This resilience is modeled through a potentially arbitrary limit-order book shape. To account for liquidity dynamics, we introduce a stochastic volume effect governing the recovery of the deviation process, which represents the difference between the impacted and unaffected price. Additionally, we incorporate stochastic liquidity variations through a regime-switching Markov chain to capture abrupt shifts in market conditions. We study this singular control problem, where the trader optimally determines the timing and rate of purchases to minimize execution costs. The value function associated with this optimization problem is shown to satisfy a system of variational Hamilton-Jacobi-Bellman inequalities. Moreover, we establish that it is the unique viscosity solution to this HJB system and study the analytical properties of the free boundary separating the execution and continuation regions. To illustrate our results, we present numerical examples under different limit-order book configurations, highlighting the interplay between price impact, resilience dynamics, and stochastic liquidity regimes in shaping the optimal execution strategy.
We study S-shaped utility maximisation with VaR constraint and unobservable drift coefficient. Using the Bayesian filter, the concavification principle, and the change of measure, we give a semi-closed integral representation for the dual value function and find a critical wealth level that determines if the constrained problem admits a unique optimal solution and Lagrange multiplier or is infeasible. We also propose three algorithms (Lagrange, simulation, deep neural network) to solve the problem and compare their performances with numerical examples.
In April 2020, the Chicago Mercantile Exchange temporarily switched the pricing formula for West Texas Intermediate oil market options from the Black model to the Bachelier model. In this context, we introduce an Additive Bachelier model that provides a simple closed-form solution and a good description of the implied volatility surface. This new Additive model exhibits several notable mathematical and financial properties. It ensures the no-arbitrage condition, a critical requirement in highly volatile markets, while also enabling a parsimonious synthesis of the volatility surface. The model features only three parameters, each with a clear financial interpretation: the volatility term structure, the vol-of-vol, and a skew parameter. The proposed model supports efficient pricing of path-dependent exotic options via Monte Carlo simulation, using a straightforward and computationally efficient approach. Its calibration can follow a cascade: first, it accurately replicates the term structures of forwards and at-the-money volatilities observed in the market; second, it fits the smile of the volatility surface. Overall, this model provides a robust and parsimonious description of the oil option market during the exceptionally volatile first period of the Covid-19 pandemic.
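For reference, the baseline Bachelier price that the Additive model builds on: the undiscounted call on a forward with normal volatility $\sigma_N$ is $C = (F-K)\,\Phi(d) + \sigma_N\sqrt{T}\,\varphi(d)$ with $d = (F-K)/(\sigma_N\sqrt{T})$. The negative-price example below mirrors the WTI episode with made-up numbers:

    import numpy as np
    from scipy.stats import norm

    def bachelier_call(F, K, T, sigma_n):
        # Undiscounted Bachelier call on a forward; sigma_n is the normal vol
        d = (F - K) / (sigma_n * np.sqrt(T))
        return (F - K) * norm.cdf(d) + sigma_n * np.sqrt(T) * norm.pdf(d)

    # The Bachelier model stays meaningful at negative prices (WTI, April 2020)
    print(bachelier_call(F=-5.0, K=0.0, T=0.25, sigma_n=20.0))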
In cryptocurrency markets, a key challenge for perpetual future issuers is maintaining alignment between the perpetual future price and target value. This study addresses this challenge by exploring the relationship between funding rates and perpetual future prices. Our results demonstrate that by appropriately designing funding rates, the perpetual future price can remain aligned with the target value. We develop replicating portfolios for perpetual futures, offering issuers an effective method to hedge their positions. Additionally, we provide path-dependent funding rates as a practical alternative and investigate the difference between the original and path-dependent funding rates. To achieve these results, our study employs path-dependent infinite-horizon BSDEs in conjunction with arbitrage pricing theory. Our main results are obtained by establishing the existence and uniqueness of solutions to these BSDEs and analyzing the large-time behavior of these solutions.
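A stylised simulation (not the paper's BSDE construction; the linear funding rule and all parameters are assumptions) shows the mechanism: a funding payment proportional to the perp-index gap pulls the perpetual price back to its target.

    import numpy as np

    rng = np.random.default_rng(0)
    k, n, dt = 5.0, 1_000, 1 / 365
    index = 100.0 * np.exp(np.cumsum(0.5 * np.sqrt(dt) * rng.standard_normal(n)))
    perp = np.empty(n)
    perp[0] = index[0] * 1.05                      # perp starts 5% above target
    for t in range(1, n):
        funding = k * (perp[t - 1] - index[t - 1]) * dt   # longs pay when perp > index
        perp[t] = perp[t - 1] * index[t] / index[t - 1] - funding
    print("terminal gap:", perp[-1] - index[-1])   # driven toward zero by funding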