This paper presents a realistic simulated stock market where large language models (LLMs) act as heterogeneous competing trading agents. The open-source framework incorporates a persistent order book with market and limit orders, partial fills, dividends, and equilibrium clearing alongside agents with varied strategies, information sets, and endowments. Agents submit standardized decisions using structured outputs and function calls while expressing their reasoning in natural language. Three findings emerge: First, LLMs demonstrate consistent strategy adherence and can function as value investors, momentum traders, or market makers per their instructions. Second, market dynamics exhibit features of real financial markets, including price discovery, bubbles, underreaction, and strategic liquidity provision. Third, the framework enables analysis of LLMs' responses to varying market conditions, similar to partial dependence plots in machine-learning interpretability. The framework allows simulating financial theories without closed-form solutions, creating experimental designs that would be costly with human participants, and establishing how prompts can generate correlated behaviors affecting market stability.
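Concretely, eliciting standardized decisions via structured outputs might look like the sketch below, assuming an OpenAI-style function-calling interface; the `submit_order` schema and its fields are illustrative assumptions, not the framework's actual API.

```python
# Illustrative tool schema for eliciting one standardized trading decision
# from an LLM agent (field names are assumptions, not the paper's API).
SUBMIT_ORDER_TOOL = {
    "name": "submit_order",
    "description": "Submit one trading decision for this round.",
    "parameters": {
        "type": "object",
        "properties": {
            "side": {"type": "string", "enum": ["buy", "sell", "hold"]},
            "order_type": {"type": "string", "enum": ["market", "limit"]},
            "quantity": {"type": "integer", "minimum": 0},
            "limit_price": {"type": "number"},
            "reasoning": {"type": "string"},  # free-text natural-language rationale
        },
        "required": ["side", "order_type", "quantity", "reasoning"],
    },
}

def validate_decision(args: dict) -> dict:
    """Reject malformed decisions before they reach the order book."""
    assert args["side"] in {"buy", "sell", "hold"}
    if args["order_type"] == "limit":
        assert args.get("limit_price", 0) > 0
    return args
```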
This paper investigates optimal execution strategies in intraday energy markets through a mutually exciting Hawkes process model. Calibrated to data from the German intraday electricity market, the model effectively captures key empirical features, including intra-session volatility, distinct intraday market activity patterns, and the Samuelson effect as gate closure approaches. By integrating a transient price impact model with a bivariate Hawkes process to model the market order flow, we derive an optimal trading trajectory for energy companies managing large volumes, accounting for the specific trading patterns in these markets. A back-testing analysis compares the proposed strategy against standard benchmarks such as Time-Weighted Average Price (TWAP) and Volume-Weighted Average Price (VWAP), demonstrating substantial cost reductions across various hourly trading products in intraday energy markets.
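A minimal sketch of the order-flow component, assuming exponential kernels (a common choice; the paper's calibrated kernels and the transient price-impact layer are not reproduced here):

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """lambda_i(t+) = mu_i + sum_j sum_{t_k <= t} alpha[i, j] * exp(-beta * (t - t_k))."""
    lam = np.asarray(mu, dtype=float).copy()
    for j, ts in enumerate(events):
        past = np.asarray(ts)
        past = past[past <= t]
        if past.size:
            lam += alpha[:, j] * np.exp(-beta * (t - past)).sum()
    return lam

def simulate_bivariate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning; events[0]/events[1] could be buy/sell market orders."""
    rng = np.random.default_rng(seed)
    events = ([], [])
    t = 0.0
    while True:
        # With decaying kernels the intensity just after t dominates on (t, next].
        lam_bar = intensity(t, events, mu, alpha, beta).sum()
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return events
        lam = intensity(t, events, mu, alpha, beta)
        if rng.uniform() * lam_bar <= lam.sum():
            events[int(rng.choice(2, p=lam / lam.sum()))].append(t)
```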
This paper proposes a new algorithm, the Trading Graph Neural Network (TGNN), that structurally estimates the impact of asset features, dealer features, and relationship features on asset prices in trading networks. It combines the strengths of the traditional simulated method of moments (SMM) with a recent machine learning technique, the graph neural network (GNN), and it outperforms existing reduced-form methods based on network centrality measures in prediction accuracy. The method applies to networks of any structure, allowing for heterogeneity among both traders and assets.
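As a rough illustration of the GNN component, a single message-passing layer over the trading network might look as follows; this generic mean-aggregation layer is a sketch, not the paper's TGNN architecture, and in the SMM step its structural parameters would be chosen so that simulated price moments match observed ones.

```python
import numpy as np

def gnn_layer(H, A, W_self, W_nbr):
    """One message-passing layer over the trading network.
    H: (n, d) node features (assets, dealers); A: (n, n) adjacency
    encoding trading relationships between nodes."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    msgs = (A @ H) / deg                  # mean over trading counterparties
    return np.tanh(H @ W_self + msgs @ W_nbr)
```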
Maximizing revenue for grid-scale battery energy storage systems in continuous intraday electricity markets requires strategies that can seize trading opportunities as soon as new information arrives. This paper introduces and evaluates an automated high-frequency trading strategy for battery energy storage systems trading on the intraday power market while explicitly considering the dynamics of the limit order book, market rules, and technical parameters. The standard rolling intrinsic strategy is adapted for continuous intraday electricity markets and solved using a dynamic programming approximation that is two to three orders of magnitude faster than an exact mixed-integer linear programming solution. A detailed backtest over a full year of German order book data demonstrates that the proposed dynamic programming formulation does not reduce trading profits and allows the policy to react to every relevant order book update, making realistic rapid backtesting possible. Our results show the significant revenue potential of high-frequency trading: our policy earns 58% more than when re-optimizing only once every hour and 14% more than when re-optimizing once per minute, highlighting that profits critically depend on trading speed. Furthermore, we leverage the speed of our algorithm to train a parametric extension of the rolling intrinsic strategy, increasing yearly revenue by 8.4% out of sample.
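A stylized sketch of the dynamic programming backbone, with the battery's state of charge as the state variable; the paper's formulation additionally handles order book depth, partial fills, and market rules, so the flat per-hour prices and the efficiency treatment below are simplifying assumptions.

```python
import numpy as np

def intrinsic_dp(bids, asks, soc_levels, power, eff=0.9):
    """Backward DP over hourly products; the state is the state of charge.
    bids/asks: per-hour best bid/ask prices (EUR/MWh). Stylized: ignores
    order book depth, partial fills, and fees, which the paper models."""
    n_s = len(soc_levels)
    V = np.zeros(n_s)                          # value-to-go after the last hour
    for h in reversed(range(len(bids))):
        V_new = np.full(n_s, -np.inf)
        for s, soc in enumerate(soc_levels):
            for s2, soc2 in enumerate(soc_levels):
                delta = soc2 - soc             # >0: charge (buy), <0: discharge (sell)
                if abs(delta) > power:
                    continue
                cash = (-asks[h] * delta / eff if delta > 0
                        else -bids[h] * delta * eff)
                V_new[s] = max(V_new[s], cash + V[s2])
        V = V_new
    return V[0]                                # start empty (soc_levels[0] == 0)
```

Re-running this optimization on every relevant order book update is what the rolling intrinsic strategy amounts to, which is why solver speed translates directly into revenue.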
We propose a stochastic game modelling the strategic interaction between market makers and traders of optimal execution type. For traders, the permanent price impact commonly attributed to them is replaced by quoting strategies implemented by market makers. For market makers, order flows become endogenous, driven by tactical traders rather than assumed exogenously. Using the forward-backward stochastic differential equation (FBSDE) characterization of Nash equilibria, we establish a local well-posedness result for the general game. In the specific Almgren-Chriss-Avellaneda-Stoikov model, a decoupling approach guarantees the global well-posedness of the FBSDE system via the well-posedness of an associated backward stochastic Riccati equation. Finally, by introducing small diffusion terms into the inventory processes, global well-posedness is achieved for the approximation game.
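In generic form, such an equilibrium FBSDE couples a forward state (e.g., inventories) with backward adjoint processes; the sketch below shows the standard shape of such a system, with the paper's specific coefficients abstracted into $b$, $f$, and $g$:

```latex
\begin{aligned}
dX_t &= b(t, X_t, Y_t)\,dt + \sigma\,dW_t, & X_0 &= x_0,\\
dY_t &= -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, & Y_T &= g(X_T).
\end{aligned}
```

In the linear-quadratic Almgren-Chriss-Avellaneda-Stoikov case one expects a decoupling ansatz of the form $Y_t = P_t X_t + p_t$, reducing well-posedness to that of the backward stochastic Riccati equation for $P_t$, consistent with the abstract.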
Public announcement dates are used in the green bond literature to measure equity market reactions to upcoming green bond issues. We find that a sizeable number of green bond announcements were pre-dated by anonymous information leakages on the Bloomberg Terminal. From a candidate set of 2,036 'Bloomberg News' and 'Bloomberg First Word' headlines gathered between 2016 and 2022, we identify 259 instances of green bond-related information being released before being publicly announced by the issuing firm. These pre-announcement leaks significantly alter the equity trading dynamics of the issuing firms over intraday and daily event windows. Significant negative abnormal returns and increased trading volumes are observed following news leaks about upcoming green bond issues. These negative investor reactions are concentrated among financial firms and leaks that arrive pre-market or early in market trading. We find that equity price movements following news leaks can be explained to a greater degree than those following public announcements. Sectoral differences are also observed in the key drivers of investor reactions to green bond leaks by non-financials (Tobin's Q and free cash flow) and financials (ROA). Our results suggest that information leakages have a strong impact on market behaviour and should be accounted for in the green bond literature. Our findings also have broader ramifications for the finance literature going forward. Privileged access to financially material information, courtesy of the ubiquitous use of Bloomberg Terminals by professional investors, highlights the need for event studies to consider wider sets of communication channels to confirm the date at which information first becomes available.
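The abnormal-return methodology referenced here is the standard market-model event study; a minimal sketch (the paper's exact estimation and event windows are not given in the abstract):

```python
import numpy as np

def market_model_event_study(stock_ret, mkt_ret, est, evt):
    """Estimate alpha/beta over the estimation window `est` (a slice), then
    compute abnormal returns (AR) and cumulative AR over the event window `evt`."""
    r, m = stock_ret[est], mkt_ret[est]
    beta = np.cov(r, m, ddof=0)[0, 1] / np.var(m)
    alpha = r.mean() - beta * m.mean()
    ar = stock_ret[evt] - (alpha + beta * mkt_ret[evt])
    return ar, ar.cumsum()   # AR and CAR around the leak/announcement date
```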
We find the equilibrium contract that an automated market maker (AMM) offers to their strategic liquidity providers (LPs) in order to maximize the order flow that gets processed by the venue. Our model is formulated as a leader-follower stochastic game, where the venue is the leader and a representative LP is the follower. We derive approximate closed-form equilibrium solutions to the stochastic game and analyze the reward structure. Our findings suggest that under the equilibrium contract, LPs have incentives to add liquidity to the pool only when higher liquidity on average attracts more noise trading. The equilibrium contract depends on the external price, the pool reference price, and the pool reserves. Our framework offers insights into AMM design for maximizing order flow while ensuring LP profitability.
Quantitative investment (quant) is an emerging, technology-driven approach in asset management, increasingly shaped by advancements in artificial intelligence. Recent advances in deep learning and large language models (LLMs) for quant finance have improved predictive modeling and enabled agent-based automation, suggesting a potential paradigm shift in this field. In this survey, taking alpha strategy as a representative example, we explore how AI contributes to the quantitative investment pipeline. We first examine the early stage of quant research, centered on human-crafted features and traditional statistical models with an established alpha pipeline. We then discuss the rise of deep learning, which enabled scalable modeling across the entire pipeline from data processing to order execution. Building on this, we highlight the emerging role of LLMs in extending AI beyond prediction, empowering autonomous agents to process unstructured data, generate alphas, and support self-iterative workflows.
Decentralised exchanges (DEXs) have transformed trading by enabling trustless, permissionless transactions, yet they face significant challenges such as impermanent loss and slippage, which undermine profitability for liquidity providers and traders. In this paper, we introduce QubitSwap, an innovative DEX model designed to tackle these issues through a hybrid approach that integrates an external oracle price with internal pool dynamics. This is achieved via a parameter $z$, which governs the balance between these price sources, creating a flexible and adaptive pricing mechanism. Through rigorous mathematical analysis, we derive a novel reserve function and pricing model that substantially reduces impermanent loss and slippage compared to traditional DEX frameworks. Notably, our results show that as $z$ approaches 1, slippage approaches zero, enhancing trading stability. QubitSwap marks a novel approach in DEX design, delivering a more efficient and resilient platform. This work not only advances the theoretical foundations of decentralised finance but also provides actionable solutions for the broader DeFi ecosystem.
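To see the role of $z$, compare a constant-product pool's execution price with a hybrid quote; the linear blend below is an illustrative assumption for exposition, since the paper derives its own reserve function and pricing model.

```python
def cpmm_exec_price(x, y, dx):
    """Average execution price for selling dx of asset X into an
    x*y=k constant-product pool (standard AMM mechanics)."""
    dy = y - (x * y) / (x + dx)
    return dy / dx

def hybrid_exec_price(z, oracle, x, y, dx):
    """Illustrative hybrid quote: the linear blend controlled by z is an
    assumption for exposition; QubitSwap derives its own reserve function."""
    return z * oracle + (1.0 - z) * cpmm_exec_price(x, y, dx)

# As z -> 1 the trade-size-dependent pool term vanishes, so slippage -> 0,
# matching the limiting behaviour reported in the paper.
```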
This paper introduces a novel algorithm for generating realistic metaorders from public trade data, addressing a longstanding challenge in price impact research, which has traditionally relied on proprietary datasets. Our method recovers all established stylized facts of metaorder impact, such as the Square Root Law, the concave profile during metaorder execution, and the post-execution decay. The algorithm not only overcomes the dependence on proprietary data, a major barrier to research reproducibility, but also enables the creation of larger and more robust datasets that can improve the quality of empirical studies. Our findings strongly suggest that average realized short-term price impact is not due to information revelation (as in the Kyle framework) but has a mechanical origin, which could explain the universality of the Square Root Law.
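The Square Root Law referenced here states that impact scales roughly as $\sigma\sqrt{Q/V}$ for a metaorder of size $Q$ against daily volume $V$ and volatility $\sigma$; a minimal sketch of the standard empirical check on a reconstructed metaorder dataset (variable choices are assumptions):

```python
import numpy as np

def square_root_law_fit(Q, V, sigma, impact):
    """Fit impact = Y * sigma * (Q/V)**delta in log space.
    Arrays: Q metaorder sizes, V daily volumes, sigma daily vols,
    impact realized peak impacts. The Square Root Law is delta ~ 0.5."""
    x = np.log(Q / V)
    y = np.log(impact / sigma)
    delta, logY = np.polyfit(x, y, 1)   # slope, intercept
    return np.exp(logY), delta
```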
This article investigates the use of the Informer architecture for building automated trading strategies on high-frequency Bitcoin data. Three strategies using the Informer model with different loss functions, Root Mean Squared Error (RMSE), Generalized Mean Absolute Directional Loss (GMADL), and Quantile loss, are proposed and evaluated against a buy-and-hold benchmark and two benchmark strategies based on technical indicators. The evaluation is conducted on data at 5-minute, 15-minute, and 30-minute intervals over six different periods. Although the Informer-based model with Quantile loss did not outperform the benchmark, the two other models achieved better results. The performance of the model using RMSE loss worsens with higher-frequency data, while the model using the novel GMADL loss function benefits from higher-frequency data: trained on 5-minute intervals, it beats all the other strategies in most of the testing periods. The primary contribution of this study is the application and assessment of the RMSE, GMADL, and Quantile loss functions with the Informer model to forecast future returns, and the use of these forecasts to develop automated trading strategies. The research provides evidence that an Informer model trained with the GMADL loss function can deliver superior trading outcomes compared to the buy-and-hold approach.
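For reference, a sketch of a GMADL-style directional loss in NumPy; the exact published form and the parameter defaults should be checked against the GMADL paper, so treat this as an assumption-laden illustration of the idea of rewarding sign agreement weighted by move size.

```python
import numpy as np

def gmadl(y_true, y_pred, a=100.0, b=2.0):
    """GMADL-style directional loss (sketch; exact published form may differ).
    The sigmoid term is positive when predicted and realized returns agree
    in sign, so minimizing the negated, magnitude-weighted term rewards
    directionally correct forecasts of large moves."""
    direction = 1.0 / (1.0 + np.exp(-a * y_true * y_pred)) - 0.5
    return np.mean(-direction * np.abs(y_true) ** b)
```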
We model the trading activity between a broker and her clients (informed and uninformed traders) as an infinite-horizon stochastic control problem. We derive the broker's optimal dealing strategy in closed form and use this to introduce an algorithm that bypasses the need to calibrate individual parameters, so the dealing strategy can be executed in real-world trading environments. Finally, we characterise the discount in the price of liquidity a broker offers clients. The discount strikes the optimal balance between maximising the order flow from the broker's clients and minimising adverse selection losses to the informed traders.
The paper analyses trade between the most developed economies of the world, based on a previously proposed model of international trade grounded in the theory of general economic equilibrium. In this model, the demand for goods is built from the imports of each country participating in the trade, and the structure of supply is determined by each country's exports. It is proved that, for a certain structure of supply and demand, the model admits a so-called ideal equilibrium state in which the trade balance of every country is zero. Under certain conditions on the structure of supply and demand, there is an equilibrium state in which every country has a strictly positive trade balance. Among the equilibrium states under a certain structure of supply and demand there are also states that differ from those described above; they are characterized by an inequitable distribution of income between the participants in the trade and are called degenerate. Using this model, the paper analyses the dynamics of international trade among eight of the world's most developed economies. It is shown that trade between these countries was not in a state of economic equilibrium. The relative equilibrium price vector found turns out to be highly degenerate, which indicates an unequal exchange of goods on the market of the eight countries studied. An analysis of the dynamics of supply to the market of the world's most developed economies shows an increase in China's share; the same applies to its share of demand.
Traditional Long Short-Term Memory (LSTM) networks are effective for handling sequential data but have limitations, such as gradient vanishing and difficulty capturing long-term dependencies, which can impact their performance in dynamic and risky environments like stock trading. To address these limitations, this study explores the use of the newly introduced Extended Long Short-Term Memory (xLSTM) network in combination with a deep reinforcement learning (DRL) approach for automated stock trading. Our proposed method uses xLSTM networks in both the actor and critic components, enabling effective handling of time series data and dynamic market environments. Proximal Policy Optimization (PPO), with its ability to balance exploration and exploitation, is employed to optimize the trading strategy. Experiments were conducted using financial data from major tech companies over a comprehensive timeline, demonstrating that the xLSTM-based model outperforms LSTM-based methods on key trading evaluation metrics, including cumulative return, average profitability per trade, maximum earning rate, maximum pullback, and Sharpe ratio. These findings highlight the potential of xLSTM for enhancing DRL-based stock trading systems.
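A minimal sketch of the actor-critic layout, with `nn.LSTM` standing in for the xLSTM block (xLSTM is not in PyTorch; an mLSTM/sLSTM implementation would replace the encoder to reproduce the paper's setup); the observation features and three-action space are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentActorCritic(nn.Module):
    """PPO actor-critic with a recurrent encoder over market observations.
    nn.LSTM is a stand-in for the paper's xLSTM block."""
    def __init__(self, n_features, hidden=64, n_actions=3):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, n_actions)   # buy / hold / sell logits
        self.critic = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, obs_seq):                     # obs_seq: (B, T, n_features)
        h, _ = self.encoder(obs_seq)
        last = h[:, -1]                             # encoding of the latest step
        dist = torch.distributions.Categorical(logits=self.actor(last))
        return dist, self.critic(last)
```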
We study optimal execution in markets with transient price impact in a competitive setting with $N$ traders. Motivated by prior negative results on the existence of pure Nash equilibria, we consider randomized strategies for the traders and whether allowing such strategies can restore the existence of equilibria. We show that given a randomized strategy, there is a non-randomized strategy with strictly lower expected execution cost, and moreover this de-randomization can be achieved by a simple averaging procedure. As a consequence, Nash equilibria cannot contain randomized strategies, and non-existence of pure equilibria implies non-existence of randomized equilibria. Separately, we also establish uniqueness of equilibria. Both results hold in a general transaction cost model given by a strictly positive definite impact decay kernel and a convex trading cost.
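The de-randomization result is, in essence, a strict Jensen inequality: with a strictly positive definite decay kernel $G$ the execution cost is strictly convex in the strategy, so replacing a randomized strategy by its average strictly lowers expected cost. In generic notation (a sketch, not the paper's exact cost functional):

```latex
C(x) = \frac{1}{2}\int_0^T\!\!\int_0^T G(|t-s|)\,dx_s\,dx_t + \text{(convex trading cost)},
\qquad
\mathbb{E}\bigl[C(X)\bigr] > C\bigl(\mathbb{E}[X]\bigr),
```

with strict inequality whenever the strategy $X$ is genuinely randomized, which is why no Nash equilibrium can contain a randomized strategy.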
We study a multi-agent setting in which brokers transact with an informed trader. Through a sequential Stackelberg-type game, the brokers manage trading costs and adverse selection when facing the informed trader. In particular, supplying liquidity to the informed trader allows the brokers to speculate on the basis of the flow information. They simultaneously attempt to minimize inventory risk and trading costs with the lit market based on the informed order flow, a strategy also known as internalization-externalization. We solve in closed form for the trading strategy that the informed trader uses with each broker and propose a system of equations that characterizes the brokers' equilibrium strategies. Solving these equations numerically lets us study the resulting equilibrium strategies. Finally, we formulate a competitive game between brokers to determine, subject to precommitment, the liquidity prices supplied to the informed trader, and we provide a numerical example in which the resulting equilibrium is not Pareto efficient.
We introduce the first formal model capturing the elicitation of unverifiable information from a party (the "source") with implicit signals derived by other players (the "observers"). Our model is motivated in part by applications in decentralized physical infrastructure networks (a.k.a. "DePIN"), an emerging application domain in which physical services (e.g., sensor information, bandwidth, or energy) are provided at least in part by untrusted and self-interested parties. A key challenge in these signal network applications is verifying the level of service that was actually provided by network participants. We first establish a condition called source identifiability, which we show is necessary for the existence of a mechanism for which truthful signal reporting is a strict equilibrium. For a converse, we build on techniques from peer prediction to show that in every signal network that satisfies the source identifiability condition, there is in fact a strictly truthful mechanism, where truthful signal reporting gives strictly higher total expected payoff than any less informative equilibrium. We furthermore show that this truthful equilibrium is in fact the unique equilibrium of the mechanism if there is positive probability that any one observer is unconditionally honest (e.g., if an observer were run by the network owner). Also, by extending our condition to coalitions, we show that there are generally no collusion-resistant mechanisms in the settings that we consider. We apply our framework and results to two DePIN applications: proving location, and proving bandwidth. In the location-proving setting observers learn (potentially enlarged) Euclidean distances to the source. Here, our condition has an appealing geometric interpretation, implying that the source's location can be truthfully elicited if and only if it is guaranteed to lie inside the convex hull of the observers.
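The convex-hull condition in the location application can be checked directly; a small illustrative test of the geometric criterion (not the paper's mechanism itself):

```python
import numpy as np
from scipy.spatial import Delaunay

def inside_convex_hull(observers, source):
    """Per the location application: the source's position is truthfully
    elicitable iff it is guaranteed to lie in the observers' convex hull.
    find_simplex returns -1 for points outside the triangulated hull."""
    tri = Delaunay(np.asarray(observers, dtype=float))
    return int(tri.find_simplex(np.atleast_2d(source))[0]) >= 0

observers = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(inside_convex_hull(observers, (1.0, 1.0)))   # True: identifiable
print(inside_convex_hull(observers, (5.0, 5.0)))   # False: outside the hull
```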
Despite the growing attention to time series forecasting in recent years, many studies have proposed solutions to the challenges of time series prediction with the aim of improving forecasting performance. However, effectively applying these forecasting models to financial asset pricing remains challenging, and a bridge is still needed between cutting-edge time series forecasting models and financial asset pricing. To build this bridge, we undertake the following efforts: 1) we construct three datasets from the financial domain; 2) we select over ten time series forecasting models from recent studies and validate their performance on financial time series; 3) we develop new metrics, msIC and msIR, in addition to MSE and MAE, to capture the time series correlation learned by the models; 4) we design financial-specific tasks for the three datasets and assess the practical performance and application potential of these forecasting models on important financial problems. We hope the resulting evaluation suite, FinTSBridge, provides valuable insights into the effectiveness and robustness of advanced forecasting models in financial domains.
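Since the abstract does not define msIC and msIR, the sketch below shows only the base information coefficient (IC) and an IR-style ratio from which such multi-step metrics are typically built; the exact definitions of the `ms` variants are in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def ic_and_ir(preds, realized):
    """Per-period rank information coefficient (IC) between forecasts and
    realized returns, plus an IR-style mean/std ratio across periods.
    preds/realized: sequences of cross-sectional arrays, one pair per period."""
    ics = []
    for p, r in zip(preds, realized):
        rho, _ = spearmanr(p, r)
        ics.append(rho)
    ics = np.asarray(ics)
    return ics.mean(), ics.mean() / ics.std()
```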