Financial news plays a critical role in the information diffusion process in financial markets and is a known driver of stock prices. However, the information in a news article is not necessarily self-contained, and accurate interpretation often requires a broader understanding of the historical news coverage. Identifying and incorporating the most relevant contextual information, moreover, presents significant challenges. In this work, we explore how historical context affects the ability of large language models to understand the market impact of financial news. We find that historical context provides a consistent and significant improvement in performance across methods and time horizons. Building on this finding, we propose an efficient and effective contextualization method in which a large LM processes the main article while a small LM encodes the historical context into concise summary embeddings that are then aligned with the large model's representation space. We probe the model's behavior through multiple qualitative and quantitative interpretability tests and reveal insights into the value of contextualization. Finally, we demonstrate that the value of historical context in model predictions has real-world applications, translating to substantial improvements in simulated investment performance.
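A minimal sketch of how such an alignment step could look, assuming the small LM's summary embeddings are mapped into the large model's embedding space by a learned projection and prepended as soft tokens; the class name, dimensions, and two-layer design are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ContextAligner(nn.Module):
    """Projects summary embeddings from a small context encoder into the
    representation space of a large LM. Hypothetical sketch: dimensions
    and the MLP design are assumptions, not the paper's exact module."""
    def __init__(self, small_dim: int = 384, large_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(small_dim, large_dim),
            nn.GELU(),
            nn.Linear(large_dim, large_dim),
        )

    def forward(self, ctx_embeddings: torch.Tensor) -> torch.Tensor:
        # ctx_embeddings: (batch, n_articles, small_dim) summaries from the small LM.
        # Output: (batch, n_articles, large_dim) "soft tokens" that can be
        # prepended to the main article's token embeddings in the large LM.
        return self.proj(ctx_embeddings)
```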
We reframe construction of the implied volatility surface (IVS) as a meta-learning problem: by training across trading days, the model learns a general process that reconstructs a full IVS from a few quotes, eliminating daily recalibration. We introduce the Volatility Neural Process, an attention-based model trained in two stages: pre-training on SABR-generated surfaces to encode a financial prior, followed by fine-tuning on market data. On S&P 500 options (2006-2023; out-of-sample 2019-2023), our model outperforms SABR, SSVI, Gaussian Process regression, and an ablation trained only on real data. Relative to the ablation, the SABR-induced prior reduces RMSE by about 40% and dominates in mid- and long-maturity regions where quotes are sparse. The learned prior suppresses large errors, providing a practical, data-efficient route to stable IVS construction with a single deployable model.
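The abstract does not detail the synthetic-surface generator beyond "SABR-generated surfaces"; a plausible sketch using the standard Hagan et al. (2002) lognormal SABR approximation to produce pre-training surfaces could look like this (parameter ranges and the grid are illustrative):

```python
import numpy as np

def sabr_implied_vol(F, K, T, alpha, beta, rho, nu):
    """Hagan et al. (2002) lognormal SABR approximation."""
    F, K = np.asarray(F, float), np.asarray(K, float)
    logFK = np.log(F / K)
    FK = (F * K) ** ((1 - beta) / 2)          # (F*K)^((1-beta)/2)
    z = (nu / alpha) * FK * logFK
    x = np.log((np.sqrt(1 - 2 * rho * z + z**2) + z - rho) / (1 - rho))
    # z/x -> 1 at the money (z -> 0); guard the division
    zx = np.where(np.abs(z) < 1e-8, 1.0,
                  z / np.where(np.abs(x) < 1e-12, 1.0, x))
    A = alpha / (FK * (1 + (1 - beta)**2 / 24 * logFK**2
                         + (1 - beta)**4 / 1920 * logFK**4))
    B = 1 + ((1 - beta)**2 / 24 * alpha**2 / FK**2
             + rho * beta * nu * alpha / (4 * FK)
             + (2 - 3 * rho**2) / 24 * nu**2) * T
    return A * zx * B

# One synthetic surface on a moneyness-by-maturity grid, random SABR params
rng = np.random.default_rng(0)
strikes = np.linspace(0.7, 1.3, 13)                 # moneyness K/F
maturities = np.array([0.1, 0.25, 0.5, 1.0, 2.0])
alpha, beta = rng.uniform(0.1, 0.4), 0.5
rho, nu = rng.uniform(-0.7, 0.0), rng.uniform(0.2, 1.0)
surface = np.array([sabr_implied_vol(1.0, strikes, T, alpha, beta, rho, nu)
                    for T in maturities])           # (n_maturities, n_strikes)
```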
In recent years, China's bond market has seen a surge in defaults amid regulatory reforms and macroeconomic volatility. Traditional machine learning models struggle to capture financial data's irregularity and temporal dependencies, while most deep learning models lack the interpretability that is critical for financial decision-making. To tackle these issues, we propose EMDLOT (Explainable Multimodal Deep Learning for Time-series), a novel framework for multi-class bond default prediction. EMDLOT integrates numerical time series (financial and macroeconomic indicators) with unstructured textual data (bond prospectuses), uses a Time-Aware LSTM to handle irregular sequences, and adopts soft clustering and multi-level attention to boost interpretability. Experiments on 1,994 Chinese firms (2015-2024) show that EMDLOT outperforms traditional (e.g., XGBoost) and deep learning (e.g., LSTM) benchmarks in recall, F1-score, and mAP, especially in identifying defaulted/extended firms. Ablation studies validate each component's value, and attention analyses reveal economically intuitive default drivers. This work provides a practical tool and a trustworthy framework for transparent financial risk modeling.
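For readers unfamiliar with Time-Aware LSTMs, a simplified cell in the spirit of Baytas et al. (2017), where the short-term component of the cell state is decayed by the elapsed time between observations, might look as follows; the decay g(dt) = 1/log(e + dt) and the layer sizes are illustrative, and the paper's exact variant may differ:

```python
import torch
import torch.nn as nn

class TimeAwareLSTMCell(nn.Module):
    """Simplified T-LSTM cell: the short-term part of the cell memory is
    discounted by the elapsed time between consecutive observations, so
    irregularly spaced financial reports are handled explicitly."""
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.lstm = nn.LSTMCell(input_dim, hidden_dim)
        self.decomp = nn.Linear(hidden_dim, hidden_dim)  # short-term extractor

    def forward(self, x, h, c, dt):
        # dt: (batch, 1) elapsed time since the previous observation
        c_short = torch.tanh(self.decomp(c))             # short-term component
        c_short_dis = c_short / torch.log(torch.e + dt)  # decay g(dt)=1/log(e+dt)
        c_adj = (c - c_short) + c_short_dis              # keep long-term memory
        return self.lstm(x, (h, c_adj))                  # standard gates on c_adj
```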
Financial time series forecasting is both highly significant and challenging. Previous approaches typically standardize time series data before feeding it into forecasting models, but this encoding step inherently loses important information. Moreover, past time series models generally require a fixed number of variables or a fixed lookback window length, which further limits the scalability of time series forecasting. In addition, interpretability and uncertainty in forecasting remain areas requiring further research, as these factors directly affect the reliability and practical value of predictions. To address these issues, we first construct a diverse financial image-text dataset (FVLDB) and develop the Uncertainty-adjusted Group Relative Policy Optimization (UARPO) method, which enables the model not only to output predictions but also to analyze their uncertainty. We then propose FinZero, a multimodal pre-trained model fine-tuned with UARPO to perform reasoning, prediction, and analytical understanding on the FVLDB financial time series. Extensive experiments validate that FinZero exhibits strong adaptability and scalability. After fine-tuning with UARPO, FinZero achieves an approximately 13.48% improvement in prediction accuracy over GPT-4o in the high-confidence group, demonstrating the effectiveness of reinforcement learning fine-tuning in multimodal large models, including for financial time series forecasting tasks.
Simulating realistic financial time series is essential for stress testing, scenario generation, and decision-making under uncertainty. Despite advances in deep generative models, there is no consensus metric for their evaluation. We focus on generative AI for financial time series in decision-making applications and employ the nested optimal transport distance, a time-causal variant of the optimal transport distance that is robust for downstream tasks such as hedging, optimal stopping, and reinforcement learning. Moreover, we propose a statistically consistent, naturally parallelizable algorithm for its computation, achieving substantial speedups over existing approaches.
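As a toy illustration of the nested (adapted) distance idea, the sketch below computes a two-step version by backward induction: inner one-dimensional Wasserstein distances between conditional second-step laws enter the outer first-step transport cost. It assumes uniform weights and equally sized first-step supports so the outer problem reduces to an assignment; the paper's statistically consistent, parallel algorithm is far more general:

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.optimize import linear_sum_assignment

def nested_distance_2step(paths_p, paths_q):
    """Toy nested W1 distance between two empirical two-step path measures
    (lists of (x1, x2) pairs), via backward induction."""
    def group(paths):                      # conditional laws of x2 given x1
        out = {}
        for x1, x2 in paths:
            out.setdefault(x1, []).append(x2)
        return out
    gp, gq = group(paths_p), group(paths_q)
    xs, ys = sorted(gp), sorted(gq)
    assert len(xs) == len(ys), "toy version: equal first-step support sizes"
    # outer cost = first-step cost + inner W1 between conditional laws
    cost = np.array([[abs(x - y) + wasserstein_distance(gp[x], gq[y])
                      for y in ys] for x in xs])
    rows, cols = linear_sum_assignment(cost)   # OT for uniform equal-size atoms
    return cost[rows, cols].mean()

P = [(0.0, 0.5), (0.0, -0.5), (1.0, 1.2), (1.0, 0.8)]
Q = [(0.1, 0.0), (0.1, 0.2), (0.9, 1.0), (0.9, 1.1)]
print(nested_distance_2step(P, Q))
```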
We present a deep learning framework for pricing options based on market-implied volatility surfaces. Using end-of-day S&P 500 index options quotes from 2018-2023, we construct arbitrage-free volatility surfaces and generate training data for American puts and arithmetic Asian options using QuantLib. To address the high dimensionality of volatility surfaces, we employ a variational autoencoder (VAE) that compresses volatility surfaces across maturities and strikes into a 10-dimensional latent representation. We feed these latent variables, combined with option-specific inputs such as strike and maturity, into a multilayer perceptron to predict option prices. The model is trained in stages: first the VAE for volatility surface compression and reconstruction, then the option-pricing mapping, and finally end-to-end fine-tuning of the entire network. The trained pricer achieves high accuracy across American and Asian options, with prediction errors concentrated primarily near long maturities and at-the-money strikes, where absolute bid-ask price differences are known to be large. Our method offers an efficient and scalable approach that requires only a single neural network forward pass and naturally improves with additional data. By bridging volatility surface modeling and option pricing in a unified framework, it provides a fast and flexible alternative to traditional numerical approaches for exotic options.
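A compact sketch of the two network components under stated assumptions (an 8 x 11 surface grid flattened to 88 inputs, and strike plus maturity as the only contract features); the paper's layer sizes and staged-training details are not given in the abstract:

```python
import torch
import torch.nn as nn

class IVSVAE(nn.Module):
    """Toy VAE compressing a flattened vol surface into a 10-d latent."""
    def __init__(self, surf_dim: int = 88, latent_dim: int = 10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(surf_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, surf_dim))

    def forward(self, surf):
        h = self.enc(surf)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
        return self.dec(z), mu, logvar

class LatentPricer(nn.Module):
    """MLP mapping (surface latent, strike, maturity) to an option price."""
    def __init__(self, latent_dim: int = 10, n_contract_feats: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + n_contract_feats, 64),
                                 nn.ReLU(), nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, z, contract):
        return self.net(torch.cat([z, contract], dim=-1))
```

In the staged scheme, the VAE would be trained first on surfaces alone, the pricer then fit on frozen latent codes, and finally both would be unfrozen for joint end-to-end fine-tuning.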
We study the problem of designing and hedging unit-linked life policies whose benefits depend on an investment fund that incorporates environmental criteria in its selection process. Offering these products poses two key challenges: constructing a green investment fund and developing a hedging strategy for policies written on that fund. We address these two problems separately. First, we design a portfolio selection rule driven by firms' carbon intensity that endogenously selects assets and avoids ad hoc pre-screens based on ESG scores. The effectiveness of our new portfolio selection method is tested using real market data. Second, we adopt the perspective of an insurance company issuing unit-linked policies written on this fund. Such contracts are exposed to market, carbon, and mortality risk, which the insurer seeks to hedge. Due to market incompleteness, we address the hedging problem via a quadratic approach aimed at minimizing the tracking error. We also perform a numerical analysis to assess the performance of the hedging strategy. For our simulation study, we use an efficient weak second-order scheme that allows for variance reduction.
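The abstract does not spell out the selection rule; as a heavily simplified illustration of carbon-intensity-driven weighting without an ESG pre-screen, one could imagine something like the following (the tilt exponent and the figures are toy assumptions, not the paper's rule):

```python
import numpy as np

def green_weights(carbon_intensity, gamma: float = 1.0):
    """Illustrative endogenous selection: portfolio mass decays with a
    firm's carbon intensity, so cleaner firms are overweighted without
    any ad hoc ESG screen. Shows the mechanism only."""
    tilt = np.asarray(carbon_intensity, float) ** (-gamma)
    return tilt / tilt.sum()

ci = [50.0, 120.0, 300.0, 15.0]    # tCO2e per $M revenue (toy figures)
print(green_weights(ci))           # the 15.0 firm receives the largest weight
```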
We propose a new pseudo-Siamese Network for Asset Pricing (SNAP) model, based on deep learning approaches, for conditional asset pricing. Our model estimates deep alpha, deep beta, and deep factor risk premia conditional on high-dimensional observable information about financial characteristics and macroeconomic states, while capturing long-term dependencies among the informative features through a long short-term memory network. We apply this method to monthly U.S. stock returns from 1970-2019 and find that our pseudo-SNAP model outperforms benchmark approaches in terms of out-of-sample prediction and out-of-sample Sharpe ratio. In addition, we compute deep mispricing errors, which we use to construct an arbitrage portfolio via K-means clustering. We find that the arbitrage portfolio earns significant alphas.
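A hypothetical two-tower sketch of the pseudo-Siamese idea, with one LSTM tower producing deep alpha and betas from firm characteristics and the other producing deep factor risk premia from macro states; the wiring and layer sizes are illustrative guesses, not the paper's architecture:

```python
import torch
import torch.nn as nn

class PseudoSiamesePricer(nn.Module):
    """Two-tower conditional pricing sketch: fitted return =
    alpha(characteristics) + beta(characteristics) . premia(macro)."""
    def __init__(self, n_char, n_macro, n_factors=5, hidden=32):
        super().__init__()
        self.char_rnn = nn.LSTM(n_char, hidden, batch_first=True)
        self.macro_rnn = nn.LSTM(n_macro, hidden, batch_first=True)
        self.alpha_beta = nn.Linear(hidden, 1 + n_factors)  # [alpha, betas]
        self.premia = nn.Linear(hidden, n_factors)

    def forward(self, chars, macro):
        # chars: (batch, time, n_char) per stock; macro: (1, time, n_macro)
        h_c, _ = self.char_rnn(chars)
        h_m, _ = self.macro_rnn(macro)
        ab = self.alpha_beta(h_c[:, -1])          # use last time step
        alpha, beta = ab[:, :1], ab[:, 1:]
        lam = self.premia(h_m[:, -1])             # deep factor risk premia
        return alpha.squeeze(-1) + (beta * lam).sum(-1)
```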
This paper presents a deep generative modeling framework for controllably synthesizing implied volatility surfaces (IVSs) using a variational autoencoder (VAE). Unlike conventional data-driven models, our approach provides explicit control over meaningful shape features (e.g., volatility level, slope, curvature, term-structure) to generate IVSs with desired characteristics. In our framework, financially interpretable shape features are disentangled from residual latent factors. The target features are embedded into the VAE architecture as controllable latent variables, while the residual latent variables capture additional structure to preserve IVS shape diversity. To enable this control, IVS feature values are quantified via regression at an anchor point and incorporated into the decoder to steer generation. Numerical experiments demonstrate that the generative model enables rapid generation of realistic IVSs with desired features rather than arbitrary patterns, and achieves high accuracy across both single- and multi-feature control settings. For market validity, an optional post-generation latent-space repair algorithm adjusts only the residual latent variables to remove occasional violations of static no-arbitrage conditions without altering the specified features. Compared with black-box generators, the framework combines interpretability, controllability, and flexibility for synthetic IVS generation and scenario design.
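An illustrative decoder signature showing how target shape features can occupy explicit latent coordinates alongside residual latents (names and dimensions are assumed, not taken from the paper):

```python
import torch
import torch.nn as nn

class ControllableDecoder(nn.Module):
    """Sketch of feature-controlled IVS generation: target shape features
    (level, slope, curvature, term structure) enter as explicit latent
    coordinates next to residual latents that preserve shape diversity."""
    def __init__(self, n_features=4, n_residual=6, surf_dim=88):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features + n_residual, 128),
                                 nn.ReLU(), nn.Linear(128, surf_dim))

    def forward(self, target_features, z_residual):
        # target_features: desired shape-feature values steering generation.
        # z_residual: free latents; the post-generation repair step would
        # adjust only these to restore no-arbitrage, leaving features fixed.
        return self.net(torch.cat([target_features, z_residual], dim=-1))
```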
This paper investigates whether artificial intelligence can enhance stock clustering compared to traditional methods. We consider this in the context of the semi-strong Efficient Markets Hypothesis (EMH), which posits that prices fully reflect all public information and, accordingly, that clusters based on price information cannot be improved upon. We benchmark three clustering approaches: (i) price-based clusters derived from historical return correlations, (ii) human-informed clusters defined by the Global Industry Classification Standard (GICS), and (iii) AI-driven clusters constructed from large language model (LLM) embeddings of stock-related news headlines. At each date, each method provides a classification in which each stock is assigned to a cluster. To evaluate a clustering, we transform it into a synthetic factor model following the Arbitrage Pricing Theory (APT) framework. This enables consistent evaluation of predictive performance in a roll-forward, out-of-sample test. Using S&P 500 constituents from 2022 through 2024, we find that price-based clustering consistently outperforms both rule-based and AI-based methods, reducing root mean squared error (RMSE) by 15.9% relative to GICS and 14.7% relative to LLM embeddings. Our contributions are threefold: (i) a generalizable methodology that converts any equity grouping (manual, machine, or market-driven) into a real-time factor model for evaluation; (ii) the first direct comparison of price-based, human rule-based, and AI-based clustering under identical conditions; and (iii) empirical evidence reinforcing that short-horizon return information is largely contained in prices. These results support the EMH while offering practitioners a practical diagnostic for monitoring evolving sector structures and providing academics a framework for testing alternative hypotheses about how quickly markets absorb information.
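A toy version of the cluster-to-factor-model conversion, assuming equal-weighted cluster-average returns as factors, trailing OLS betas, and a naive persistence forecast of the factor returns; the paper's APT construction may differ in each of these choices:

```python
import numpy as np

def cluster_factor_forecast(R_hist, labels):
    """Turn a cluster assignment into an APT-style factor model:
    factors = equal-weighted cluster-average returns per day,
    betas = OLS on the trailing window (intercept omitted for brevity),
    forecast = last day's factor returns pushed through the betas."""
    k = labels.max() + 1
    F = np.stack([R_hist[:, labels == j].mean(axis=1) for j in range(k)],
                 axis=1)                              # (n_days, k) factors
    B, *_ = np.linalg.lstsq(F, R_hist, rcond=None)    # (k, n_stocks) betas
    return F[-1] @ B                                  # next-day stock forecast

rng = np.random.default_rng(1)
R = rng.normal(0, 0.01, size=(250, 100))   # one year of daily returns
labels = rng.integers(0, 10, size=100)     # any grouping: GICS, LLM, price
pred = cluster_factor_forecast(R, labels)  # compare to realized returns -> RMSE
```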
In this work we show how generative tools that have been successfully applied to limit order book data can be utilized for the task of imitating trading agents. To this end, we propose a modified generative architecture based on a state-space model and apply it to limit order book data with identified investors. The model is trained on synthetic data generated from a heterogeneous agent-based model. Finally, we compare the model's predicted distribution over different aspects of investors' actions with the ground truth known from the agent-based model.
Bayesian inference is widely used in many fields to test hypotheses against observations. In most such applications, precise input values are assumed in order to produce a precise output value. However, this is unrealistic for real-world applications. Often the best available information from subject matter experts (SMEs) in a given field consists of interval range estimates of the input probabilities involved in Bayes' theorem. This paper provides two key contributions that extend Bayes' theorem to an interval type-2 (IT2) version. First, we develop an IT2 version of Bayes' theorem that uses a novel and conservative method to avoid potential inconsistencies in the input IT2 fuzzy membership functions (MFs) that might otherwise produce invalid output results. Second, we describe a novel and flexible algorithm for encoding SME-provided intervals into IT2 MFs, which we can use to specify the input probabilities in Bayes' theorem. Our algorithm generalizes and extends previous work on this problem, which primarily addressed the encoding of intervals into word MFs for Computing with Words applications.
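The interval arithmetic underlying such bounds is easy to state for a single hypothesis: the posterior is monotone increasing in the prior and in P(E|H), and decreasing in P(E|not H), so evaluating at interval endpoints gives sharp bounds. A sketch of that interval step only (the paper's IT2 machinery layers fuzzy membership functions and a consistency-preserving construction on top of this):

```python
def interval_bayes(prior, like_h, like_not_h):
    """Posterior bounds on P(H|E) when each input is an interval (lo, hi).
    Uses monotonicity of p(H|E) = a*p / (a*p + b*(1-p)):
    increasing in the prior p and in a = P(E|H), decreasing in b = P(E|~H)."""
    def post(p, a, b):
        return a * p / (a * p + b * (1 - p))
    lo = post(prior[0], like_h[0], like_not_h[1])   # worst case for H
    hi = post(prior[1], like_h[1], like_not_h[0])   # best case for H
    return lo, hi

# SME intervals: P(H) in [0.05, 0.10], P(E|H) in [0.80, 0.95], P(E|~H) in [0.10, 0.20]
print(interval_bayes((0.05, 0.10), (0.80, 0.95), (0.10, 0.20)))
```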
In this paper we consider how we can include index options in enhanced indexation. We present the concept of an "option strategy", which enables us to treat options as an artificial asset. An option strategy for a known set of options is a specified set of rules detailing how these options are to be traded (i.e., bought, rolled over, sold) depending upon market conditions. We consider option strategies in the context of enhanced indexation, but we discuss how they have much wider applicability in portfolio optimisation. We use an enhanced indexation approach based on second-order stochastic dominance. We consider index options for the S&P 500, using a dataset of daily stock prices over the period 2017-2025 that has been manually adjusted to account for survivorship bias. This dataset is made publicly available for use by future researchers. Our computational results indicate that introducing option strategies in an enhanced indexation setting offers clear benefits in terms of improved out-of-sample performance. This applies whether we use equities or an exchange-traded fund as part of the enhanced indexation portfolio.
Stylized facts in high-frequency trading have multiple proposed explanations, including adaptive and informed agents, many of which have been studied through agent-based models. This paper investigates an alternative explanation by examining whether, and under what circumstances, interactions between traders placing limit order book messages can reproduce stylized facts, and what forms of interaction are required. While the agent-based modeling literature has introduced interconnected agents on networks, little attention has been paid to whether specific trading network topologies can generate stylized facts in limit order book markets. In our model, agents are strictly zero-intelligence, with no fundamental knowledge or chartist-like strategies, so that the role of network topology can be isolated. We find that scale-free connectivity between agents reproduces stylized facts observed in markets, whereas a no-interaction baseline does not. Our experiments show that regular lattices and Erdős-Rényi networks are not significantly different from the no-interaction baseline. Thus, we provide a completely new and potentially complementary explanation for the emergence of stylized facts.
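A minimal toy of the mechanism, using a Barabási-Albert (scale-free) trading network and zero-intelligence agents who merely copy a random neighbor's buy/sell side; the real model submits limit order book messages, which this sketch replaces with linear price impact for brevity:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(42)
N, STEPS = 500, 10_000

# Scale-free trading network; alternatives tested in the paper:
G = nx.barabasi_albert_graph(N, m=2, seed=42)
# G = nx.erdos_renyi_graph(N, p=4 / N)     # Erdos-Renyi null topology
# (regular lattice: e.g. nx.grid_2d_graph(25, 20), relabeled to integers)

side = rng.integers(0, 2, size=N)          # each agent's buy(1)/sell(0) intent
mid_price, prices = 100.0, []

for _ in range(STEPS):
    i = rng.integers(N)                    # random zero-intelligence agent acts
    nbrs = list(G.neighbors(i))
    if nbrs:                               # interaction: imitate a neighbor
        side[i] = side[rng.choice(nbrs)]
    mid_price += 0.01 if side[i] == 1 else -0.01   # toy linear price impact
    prices.append(mid_price)

returns = np.diff(np.log(prices))
# Fat tails (a stylized fact) would show up as excess kurtosis:
print("kurtosis:", ((returns - returns.mean())**4).mean() / returns.var()**2)
```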
In the highly volatile and uncertain global financial markets, traditional quantitative trading models relying on statistical modeling or empirical rules often fail to adapt to dynamic market changes and black swan events due to rigid assumptions and limited generalization. To address these issues, this paper proposes QTMRL (Quantitative Trading Multi-Indicator Reinforcement Learning), an intelligent trading agent combining multi-dimensional technical indicators with reinforcement learning (RL) for adaptive and stable portfolio management. We first construct a comprehensive multi-indicator dataset using 23 years of S&P 500 daily OHLCV data (2000-2022) for 16 representative stocks across 5 sectors, enriching the raw data with trend, volatility, and momentum indicators to capture holistic market dynamics. We then design a lightweight RL framework based on the Advantage Actor-Critic (A2C) algorithm, including data processing, A2C algorithm, and trading agent modules to support policy learning and actionable trading decisions. Extensive experiments compare QTMRL with nine baselines (e.g., ARIMA, LSTM, moving-average strategies) across diverse market regimes, verifying its superiority in profitability, risk-adjusted performance, and downside risk control. The code of QTMRL is publicly available at https://github.com/ChenJiahaoJNU/QTMRL.git
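Illustrative indicator enrichment of the kind described, using a few standard trend, volatility, and momentum features; the paper's exact indicator set is not listed in the abstract:

```python
import pandas as pd

def add_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Enrich OHLCV data (column 'close' assumed) with trend, volatility,
    and momentum features of the kind an RL trading agent could consume."""
    out = df.copy()
    out["sma_20"] = df["close"].rolling(20).mean()                 # trend
    out["vol_20"] = df["close"].pct_change().rolling(20).std()     # volatility
    out["mom_10"] = df["close"].pct_change(10)                     # momentum
    delta = df["close"].diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    out["rsi_14"] = 100 - 100 / (1 + gain / loss)                  # RSI
    return out.dropna()
```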
Financial time-series forecasting is critical for maintaining economic stability, guiding informed policymaking, and promoting sustainable investment practices. However, it remains challenging due to various underlying pattern shifts. These shifts arise primarily from three sources: temporal non-stationarity (distribution changes over time), multi-domain diversity (distinct patterns across financial domains such as stocks, commodities, and futures), and varying temporal resolutions (patterns differing across per-second, hourly, daily, or weekly indicators). While recent deep learning methods attempt to address these complexities, they frequently suffer from overfitting and typically require extensive domain-specific fine-tuning. To overcome these limitations, we introduce FinCast, the first foundation model specifically designed for financial time-series forecasting, trained on large-scale financial datasets. Remarkably, FinCast exhibits robust zero-shot performance, effectively capturing diverse patterns without domain-specific fine-tuning. Comprehensive empirical and qualitative evaluations demonstrate that FinCast surpasses existing state-of-the-art methods, highlighting its strong generalization capabilities.
This study investigates pretrained RNN attention models with mainstream attention mechanisms, such as additive attention, Luong's three attention variants, global self-attention (Self-att), and sliding-window sparse attention (Sparse-att), for empirical asset pricing research on the top 420 large-cap US stocks. This is the first paper to apply these state-of-the-art (SOTA) attention mechanisms at scale in an asset pricing context. They overcome limitations of traditional machine learning (ML) based asset pricing, such as mis-captured temporal dependencies and short memory. Moreover, the enforced causal masks in the attention mechanisms address the future-data leakage issue ignored by more advanced attention-based models such as the classic Transformer. The proposed attention models also account for the temporal sparsity of asset pricing data and mitigate potential overfitting by deploying simplified model structures, offering insights for future empirical economic research. All models are examined in three periods covering pre-COVID-19 (mild uptrend), COVID-19 (steep uptrend with a large drawdown), and one year post-COVID-19 (sideways movement with high fluctuations), to test their stability under extreme market conditions. In value-weighted portfolio backtesting, Model Self-att and Model Sparse-att exhibit strong capabilities in delivering absolute returns and hedging downside risks, achieving annualized Sortino ratios of 2.0 and 1.80, respectively, in the COVID-19 period. Model Sparse-att also performs more stably than Model Self-att in terms of absolute portfolio returns across stocks' market capitalizations.
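The causal sliding-window constraint is easy to make concrete: a mask like the following, where position t may attend only to the previous `window` positions including itself, prevents future-data leakage by construction (a generic sketch, not the paper's code):

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask for sliding-window sparse attention with causality:
    query position t may attend only to key positions in [t-window+1, t]."""
    i = torch.arange(seq_len).unsqueeze(1)   # query positions
    j = torch.arange(seq_len).unsqueeze(0)   # key positions
    return (j <= i) & (j > i - window)       # True = attention allowed

mask = sliding_window_causal_mask(seq_len=6, window=3)
# Usable as attn_mask in torch.nn.functional.scaled_dot_product_attention
print(mask.int())
```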
We describe a Matlab routine for estimating jumps in financial asset prices using the threshold (or truncation) method of Mancini (2009). The routine is designed for application to five-minute log-returns. The underlying assumption is that asset prices evolve in time following an Ito semimartingale with possibly stochastic volatility and jumps. A log-return is likely to contain a jump if its absolute value is larger than a threshold determined by the maximum increment of the Brownian semimartingale part. The latter is particularly sensitive to the magnitude of the volatility coefficient, and empirically, volatility levels typically depend on the time of day (TOD): volatility is highest at the beginning and end of the trading day and low in the middle. The first routine presented allows for estimation of the TOD effect and is an implementation of the method described in Bollerslev and Todorov (2011); the TOD effect for the stock Apple Inc. (AAPL) is then visualized. The second routine presented is an implementation of the threshold method for estimating jumps in AAPL prices. The procedure recursively estimates daily volatility and jumps. In each round, the threshold depends on the time of day and is constructed from the estimate of the daily volatility multiplied by the intraday TOD factor and by the modulus of continuity of Brownian motion paths. Once the jumps are detected, the daily volatility estimate is updated using only the log-returns that do not contain jumps. Before application to empirical data, the reliability of the procedure was tested separately on simulated asset prices. The results obtained on a record of AAPL stock prices are visualized.
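A Python sketch of the recursive thresholding the text describes (the original is a Matlab routine; the constant c, the variance estimator, and the modulus-of-continuity scaling sqrt(2 dt log(1/dt)) are illustrative choices):

```python
import numpy as np

def detect_jumps(returns, tod, n_iter: int = 5, c: float = 3.0):
    """Iterative Mancini-style threshold jump detection for one day of
    five-minute log-returns. `tod` is the intraday time-of-day volatility
    factor per return (mean roughly 1), estimated beforehand."""
    n = len(returns)
    dt = 1.0 / n                                   # fraction of the trading day
    jumps = np.zeros(n, dtype=bool)
    for _ in range(n_iter):
        # daily variance estimate from returns not currently flagged as jumps
        sigma2 = np.sum(returns[~jumps] ** 2) / (1 - dt * jumps.sum())
        # threshold: daily vol x TOD factor x modulus-of-continuity scaling
        thr = c * tod * np.sqrt(2 * sigma2 * dt * np.log(1 / dt))
        new_jumps = np.abs(returns) > thr
        if np.array_equal(new_jumps, jumps):       # converged
            break
        jumps = new_jumps
    return jumps
```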