We introduce a decentralised, algorithmic framework for permissionless, multi-strategy capital allocation via tokenised, automated vaults. The system is designed to function analogously to a multi-strategy asset management company, but is implemented entirely on-chain through a modular architecture comprising four interacting layers. The first, the capitalisation layer, is composed of vaults that facilitate multi-asset deposits, tokenise investor participation, and specify high-level risk limits and admissible venues for deployment. The second, the strategy layer, enables the submission of strategies by human developers or autonomous agents, creating a decentralised marketplace governed by a validation mechanism incorporating adversarial and gamified elements. The third, the execution layer, operationalises strategy deployment using the host blockchain network's services. The fourth, the validated allocation layer, assesses and allocates capital among validated strategies, dynamically rebalancing toward those exhibiting superior risk-adjusted performance. In the framework, each admitted strategy acts as a manager for the "fund", encapsulated in a smart contract vault that issues transferable V-Tokens conveying fractional ownership of the real-time portfolio operated by the vault. The system is designed to be open to participation by both human and AI agents, who collectively perform the roles of capital allocators, strategy developers, and validated allocators. The resulting structure is a self-regulating asset management ecosystem capable of decentralised, cooperative optimisation across traditional and digital financial domains. The framework is facilitated by a host chain network that offers native automation and data oracle services, enabling vault entities to operate autonomously on-chain and paving the way for self-sufficient, dynamic allocation of capital.
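As a rough illustration of the validated allocation layer's rebalancing role, the sketch below shifts capital toward strategies with higher risk-adjusted performance; the Sharpe-like scoring rule, the class names, and the fallback behaviour are assumptions for illustration, not the framework's actual allocation mechanism.

```python
# Illustrative sketch only: a simplified rebalancing rule for a validated
# allocation layer, assuming capital is shifted toward strategies with higher
# risk-adjusted performance (proxied here by a Sharpe-like score).
from dataclasses import dataclass

@dataclass
class StrategyVault:
    name: str
    mean_return: float   # realised mean period return of the strategy
    volatility: float    # realised return volatility

def rebalance(vaults: list[StrategyVault], total_capital: float) -> dict[str, float]:
    """Allocate capital in proportion to each strategy's non-negative Sharpe-like score."""
    scores = {v.name: max(v.mean_return / v.volatility, 0.0) for v in vaults if v.volatility > 0}
    total = sum(scores.values())
    if total == 0:
        # No strategy has a positive score: fall back to an equal split (assumed behaviour).
        return {v.name: total_capital / len(vaults) for v in vaults}
    return {name: total_capital * s / total for name, s in scores.items()}

vaults = [StrategyVault("momentum", 0.04, 0.10),
          StrategyVault("carry", 0.02, 0.05),
          StrategyVault("mean_reversion", -0.01, 0.08)]
print(rebalance(vaults, 1_000_000.0))
```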
We consider an Islamic Profit and Loss (PL) sharing contract, possibly combined with an agency contract, and introduce the notion of {\em $c$-fair} profit sharing ratios ($c = (c_1, \ldots, c_d) \in (\mathbb R^{\star})^d$, where $d$ is the number of partners), which aims at determining both the profit sharing ratios and the induced expected maturity payoffs of each partner $\ell$ according to its contribution, determined by the rate component $c_{\ell}$ of the vector $c$, to the global success of the project. We show several new results that elucidate the relation between these profit sharing ratios and important economic factors such as the investment risk, the labor, and the capital, thereby giving a way of choosing them in connection with the real economy. The design of our approach allows the use of the whole range of econometric models, or more general stochastic diffusion models, to compute or approximate the quantities of interest.
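As a loose illustration of the last point, the Monte Carlo sketch below approximates one partner's expected maturity payoff under an assumed geometric Brownian motion for the project value and a fixed sharing ratio; the dynamics, parameters, and payoff form are placeholders, and the paper's $c$-fair construction of the ratios is not reproduced.

```python
# Illustrative sketch only: Monte Carlo approximation of a partner's expected
# maturity payoff in a profit-and-loss sharing contract. The GBM dynamics,
# parameter values, and the simple payoff form ratio * (X_T - X_0) are
# assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
x0, mu, sigma, T = 100.0, 0.06, 0.25, 1.0   # project value dynamics (assumed)
ratio = 0.4                                  # sharing ratio of one partner (assumed)
n_paths = 100_000

z = rng.standard_normal(n_paths)
x_T = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Both gains and losses are shared at the same ratio, purely to keep the sketch short.
payoff = ratio * (x_T - x0)
print("estimated expected maturity payoff:", payoff.mean())
```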
The proliferation of artificial intelligence (AI) in financial services has prompted growing demand for tools that can systematically detect AI-related disclosures in corporate filings. While prior approaches often rely on keyword expansion or document-level classification, they fall short in granularity, interpretability, and robustness. This study introduces FinAI-BERT, a domain-adapted transformer-based language model designed to classify AI-related content at the sentence level within financial texts. The model was fine-tuned on a manually curated and balanced dataset of 1,586 sentences drawn from 669 annual reports of U.S. banks (2015 to 2023). FinAI-BERT achieved near-perfect classification performance (accuracy of 99.37 percent, F1 score of 0.993), outperforming traditional baselines such as Logistic Regression, Naive Bayes, Random Forest, and XGBoost. Interpretability was ensured through SHAP-based token attribution, while bias analysis and robustness checks confirmed the model's stability across sentence lengths, adversarial inputs, and temporal samples. Theoretically, the study advances financial NLP by operationalizing fine-grained, theme-specific classification using transformer architectures. Practically, it offers a scalable, transparent solution for analysts, regulators, and scholars seeking to monitor the diffusion and framing of AI across financial institutions.
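A minimal inference sketch in the spirit of the sentence-level classifier described above, assuming a fine-tuned BERT checkpoint is available locally; the checkpoint path, example sentences, and labels are placeholders rather than the released FinAI-BERT artefacts.

```python
# Illustrative sketch only: sentence-level classification with a fine-tuned
# BERT model via the Hugging Face pipeline API. The checkpoint path is a
# placeholder, not the actual FinAI-BERT weights.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="path/to/finai-bert-checkpoint",      # placeholder: locally fine-tuned BERT
    tokenizer="path/to/finai-bert-checkpoint",
)

sentences = [
    "We deployed machine learning models to improve credit underwriting decisions.",
    "The branch network was reduced by twelve locations during the fiscal year.",
]
for s in sentences:
    print(s, "->", classifier(s))   # each prediction: label plus confidence score
```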
Large Language Models (LLMs) have been employed in financial decision making, enhancing analytical capabilities for investment strategies. Traditional investment strategies often utilize quantitative models, fundamental analysis, and technical indicators. However, LLMs have introduced new capabilities to process and analyze large volumes of structured and unstructured data, extract meaningful insights, and enhance decision-making in real time. This survey provides a structured overview of recent research on LLMs within the financial domain, categorizing contributions into four main frameworks: LLM-based Frameworks and Pipelines, Hybrid Integration Methods, Fine-Tuning and Adaptation Approaches, and Agent-Based Architectures. It reviews applications in stock selection, risk assessment, sentiment analysis, trading, and financial forecasting, and highlights the capabilities, challenges, and potential directions of LLMs in financial markets.
The emergence of Open Banking represents a significant shift in financial data management, influencing financial institutions' market dynamics and marketing strategies. This increased competition creates opportunities and challenges, as institutions manage data inflow to improve products and services while mitigating data outflow that could aid competitors. This study introduces a framework to predict customers' propensity to share data via Open Banking and interprets this behavior through Explanatory Model Analysis (EMA). Using data from a large Brazilian financial institution with approximately 3.2 million customers, a hybrid data balancing strategy incorporating ADASYN and NEARMISS techniques was employed to address the infrequency of data sharing and enhance the training of XGBoost models. These models accurately predicted customer data sharing, achieving 91.39% accuracy for inflow and 91.53% for outflow. The EMA phase combined the Shapley Additive Explanations (SHAP) method with the Classification and Regression Tree (CART) technique, revealing the most influential features on customer decisions. Key features included the number of transactions and purchases in mobile channels, interactions within these channels, and credit-related features, particularly credit card usage across the national banking system. These results highlight the critical role of mobile engagement and credit in driving customer data-sharing behaviors, providing financial institutions with strategic insights to enhance competitiveness and innovation in the Open Banking environment.
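The sketch below shows the general shape of such a pipeline (ADASYN over-sampling, NearMiss under-sampling, an XGBoost classifier, SHAP attributions) on synthetic data; the toy dataset, hyperparameters, and the exact hybrid balancing recipe are assumptions, not the study's setup.

```python
# Illustrative sketch only: hybrid class balancing followed by XGBoost and SHAP.
import numpy as np
from imblearn.over_sampling import ADASYN
from imblearn.under_sampling import NearMiss
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import shap

# Synthetic, imbalanced stand-in for the (proprietary) customer dataset.
X, y = make_classification(n_samples=5000, n_features=12, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Hybrid balancing: synthesise minority examples, then prune the majority class.
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X_train, y_train)
X_bal, y_bal = NearMiss().fit_resample(X_bal, y_bal)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_bal, y_bal)
print("test accuracy:", model.score(X_test, y_test))

# SHAP attributions for the fitted tree ensemble.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:100])
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```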
We introduce a new microeconomic model of horizontal differentiation that unifies and extends previous developments inspired by the seminal work of Hotelling (1929). Our framework incorporates boundedly rational consumers, an unlimited number of firms, and arbitrary differentiation spaces modeled as Riemannian manifolds. We argue that Riemannian geometry provides a natural and powerful tool for analyzing such models, offering fresh insights into firm behavior and market structure in settings with complex products.
Passive investing has gained immense popularity due to its low fees and the perceived simplicity of focusing on zero tracking error rather than security selection. However, our analysis shows that the passive (zero tracking error) approach of waiting until the market close on the day of index reconstitution to purchase a stock (announced days earlier as an upcoming addition) incurs costs amounting to hundreds of basis points compared with strategies that gradually acquire a small portion of the required shares in advance with minimal additional tracking error. In addition, we show that under all scenarios analyzed, a trader who builds a small inventory post-announcement and provides liquidity at the reconstitution event can consistently earn several hundred basis points in profit, and often much more, while assuming minimal risk.
This paper integrates Austrian capital theory with repeated game theory to examine strategic miner behaviour under different institutional conditions in blockchain systems. It shows that when protocol rules are mutable, effective time preference rises, undermining rational long-term planning and cooperative equilibria. Using formal game-theoretic analysis and Austrian economic principles, the paper demonstrates how mutable protocols shift miner incentives from productive investment to political rent-seeking and influence games. The original Bitcoin protocol is interpreted as an institutional anchor: a fixed rule-set enabling calculability and low time preference. Drawing on the work of Böhm-Bawerk, Mises, and Hayek, the argument is made that protocol immutability is essential for restoring strategic coherence, entrepreneurial confidence, and sustainable network equilibrium.
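For context, the standard repeated-game threshold (not taken from the paper) makes the mechanism concrete: with stage payoffs $R$ for mutual cooperation, $T$ for a unilateral deviation, and $P$ for mutual defection, grim-trigger cooperation is sustainable only if the discount factor satisfies
\[
  \delta \;=\; \frac{1}{1+\rho} \;\ge\; \frac{T-R}{T-P},
\]
so any institutional change that raises the effective time-preference rate $\rho$ pushes $\delta$ below this threshold and unravels the cooperative equilibrium.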
The paper summarizes key results of the benchmark approach with a focus on the concept of benchmark-neutral pricing. It applies these results to the pricing of an extreme-maturity European put option on a well-diversified stock index. The growth optimal portfolio of the stocks is approximated by a well-diversified stock portfolio and modeled by a drifted, time-transformed squared Bessel process of dimension four. It is shown that the benchmark-neutral price of a European put option is theoretically the minimal possible price, and the corresponding risk-neutral put price turns out to be significantly higher.
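As a small numerical aside, the sketch below simulates a plain squared Bessel process of dimension four, $dX_t = 4\,dt + 2\sqrt{X_t}\,dW_t$, by Euler-Maruyama; the drift adjustment and deterministic time change used in the benchmark-neutral model above are omitted, and all parameters are placeholders.

```python
# Illustrative sketch only: Euler-Maruyama simulation of a squared Bessel
# process of dimension four (no drift adjustment or time change applied).
import numpy as np

rng = np.random.default_rng(1)
x0, T, n_steps, n_paths = 1.0, 1.0, 1000, 10_000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.standard_normal(n_paths) * np.sqrt(dt)
    x = np.maximum(x + 4.0 * dt + 2.0 * np.sqrt(np.maximum(x, 0.0)) * dw, 0.0)

# Sanity check: for BESQ(4), E[X_T] = x0 + 4T.
print("sample mean of X_T:", x.mean(), " exact:", x0 + 4.0 * T)
```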
Purpose: Financial service companies manage huge volumes of data, which requires timely error identification and resolution. The tasks associated with resolving these errors frequently put financial analyst workforces under significant pressure, leading to resourcing challenges and increased business risk. To address this challenge, we introduce a formal task allocation model which considers both business-oriented goals and analyst well-being. Methodology: We use a Genetic Algorithm (GA) to optimise our formal model to allocate and schedule tasks to analysts. The proposed solution is able to allocate tasks to analysts with appropriate skills and experience, while taking into account staff well-being objectives. Findings: We demonstrate that our GA model outperforms baseline heuristics and current working practice, and is applicable to a range of single- and multi-objective real-world scenarios. We discuss the potential for metaheuristics (such as GAs) to efficiently find sufficiently good allocations which can provide recommendations for financial service managers in-the-loop. Originality: A key gap in existing allocation and scheduling models is the full consideration of worker well-being. This paper presents an allocation model which explicitly optimises for well-being while still improving on current working practice for efficiency.
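A toy version of the approach, for intuition only: the genetic algorithm below allocates tasks to analysts while penalising both total completion time and load imbalance as a crude stand-in for well-being; the encoding, fitness weights, and problem sizes are assumptions rather than the paper's formal model.

```python
# Illustrative sketch only: a toy GA that assigns tasks to analysts, trading off
# makespan (business objective) against workload imbalance (well-being proxy).
import random

random.seed(0)
N_TASKS, N_ANALYSTS = 30, 5
effort = [random.randint(1, 8) for _ in range(N_TASKS)]   # effort of each task

def fitness(assign):
    loads = [0] * N_ANALYSTS
    for task, analyst in enumerate(assign):
        loads[analyst] += effort[task]
    makespan = max(loads)                   # finish all work quickly
    imbalance = max(loads) - min(loads)     # spread the load across analysts
    return -(makespan + 0.5 * imbalance)    # higher is better

def evolve(pop_size=60, generations=200, mutation_rate=0.05):
    pop = [[random.randrange(N_ANALYSTS) for _ in range(N_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_TASKS)
            child = a[:cut] + b[cut:]       # one-point crossover
            child = [random.randrange(N_ANALYSTS) if random.random() < mutation_rate else g
                     for g in child]        # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```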
This paper explores stochastic control models in the context of decarbonization within the energy market. We study three progressively complex scenarios: (1) a single firm operating with two technologies, one polluting and one clean; (2) a two-firm model; and (3) two firms without any regulatory incentive. For each setting, we formulate the corresponding stochastic control problem and characterize the firms' optimal strategies in terms of investment and production. The analysis highlights the strategic interactions between firms and the role of incentives in accelerating the transition to cleaner technologies.
We propose \textit{OpenAlpha}, a community-led strategy validation framework for decentralised capital management on a host blockchain network, which integrates game-theoretic validation, adversarial auditing, and market-based belief aggregation. This work formulates treasury deployment as a capital optimisation problem under verification costs and strategic misreporting, and operationalises it through a decision waterfall that sequences intention declaration, strategy proposal, prediction-market validation, dispute resolution, and capital allocation. Each phase of this framework's validation process embeds economic incentives to align proposer, verifier, and auditor behaviour, producing confidence scores that may feed into a capital allocation rule. While OpenAlpha is designed for capital strategy assessment, its validation mechanisms are composable and extend naturally to evaluating external decentralised applications (DApps), enabling on-chain scrutiny of DApp performance, reliability, and integration risk. This architecture allows for adaptive, trust-minimised capital deployment without reliance on centralised governance or static audits.
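A minimal sketch of the decision waterfall as a gated phase sequence appears below; the phase names follow the abstract, while the confidence threshold, the scoring inputs, and the gating rule are assumptions.

```python
# Illustrative sketch only: the decision waterfall as an explicit phase sequence
# with simple gates; thresholds and the scoring rule are assumptions.
from enum import Enum, auto

class Phase(Enum):
    INTENTION_DECLARATION = auto()
    STRATEGY_PROPOSAL = auto()
    PREDICTION_MARKET_VALIDATION = auto()
    DISPUTE_RESOLUTION = auto()
    CAPITAL_ALLOCATION = auto()

def run_waterfall(market_belief: float, dispute_upheld: bool, threshold: float = 0.6):
    """Walk a proposal through the phases; return the phase reached and an allocation flag."""
    for phase in Phase:
        if phase is Phase.PREDICTION_MARKET_VALIDATION and market_belief < threshold:
            return phase, False      # belief aggregation rejects the strategy
        if phase is Phase.DISPUTE_RESOLUTION and dispute_upheld:
            return phase, False      # an auditor's challenge is upheld
    return Phase.CAPITAL_ALLOCATION, True

print(run_waterfall(market_belief=0.72, dispute_upheld=False))
print(run_waterfall(market_belief=0.45, dispute_upheld=False))
```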
Recognizing the importance of jump risk in option pricing, we propose a neural jump stochastic differential equation model in this paper, which integrates neural networks as parameter estimators into the conventional jump diffusion model. To overcome the incompatibility of the backpropagation algorithm with the discrete jump process, we use the Gumbel-Softmax method to make the jump parameter learnable by gradient descent. We examine the proposed model using both simulated data and S&P 500 index options. The findings demonstrate that the incorporation of neural jump components substantially improves pricing accuracy compared to existing benchmark models.
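To make the Gumbel-Softmax idea concrete, the sketch below relaxes a discrete jump indicator inside a single Euler step of a jump diffusion so that gradients flow through the jump decision; the network size, jump-size model, and parameters are assumptions, not the paper's architecture.

```python
# Illustrative sketch only: a differentiable jump indicator via the
# straight-through Gumbel-Softmax inside one Euler step of a jump diffusion.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

jump_logit_net = torch.nn.Linear(1, 2)   # maps state -> logits for (no jump, jump)
dt, sigma, mu, jump_scale = 1 / 252, 0.2, 0.05, 0.03

def sde_step(x, tau=0.5):
    """One Euler step with a relaxed (differentiable) jump indicator."""
    logits = jump_logit_net(x)
    # Straight-through Gumbel-Softmax: one-hot forward pass, soft gradients backward.
    indicator = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1:2]
    dw = torch.randn_like(x) * dt ** 0.5
    jump = jump_scale * torch.randn_like(x)
    return x + mu * x * dt + sigma * x * dw + indicator * jump * x

x = torch.full((8, 1), 100.0)
loss = sde_step(x).mean()
loss.backward()                          # gradients flow through the jump decision
print(jump_logit_net.weight.grad)
```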
In competitive supply chains (SCs), pricing decisions are crucial, as they directly impact market share and profitability. Traditional SC models often assume continuous pricing for mathematical convenience, overlooking the practical reality of discrete price increments driven by currency constraints. Additionally, customer behavior, influenced by loyalty and strategic considerations, plays a significant role in purchasing decisions. To address these gaps, this study examines a SC model involving one supplier and two manufacturers, incorporating realistic factors such as customer demand segmentation and discrete price setting. Our analysis shows that the Nash equilibria (NE) among manufacturers are not unique; we then discuss the focal equilibrium. Our analysis also reveals that low denomination factors can lead to instability, as the corresponding game does not have an NE. Numerical simulations demonstrate that even small changes in price increments significantly affect the competitive dynamics and market share distribution.
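A toy best-response iteration on a discrete price grid, for intuition only: it looks for a fixed point of the two manufacturers' pricing game and reports whether the dynamics settle or cycle; the linear demand specification and all parameter values are assumptions, not the paper's model.

```python
# Illustrative sketch only: discrete-price best-response dynamics for two firms.
import numpy as np

def profit(p_own, p_other, cost=2.0, a=10.0, b=1.5, c=0.8):
    demand = max(a - b * p_own + c * p_other, 0.0)   # linear demand with substitution
    return (p_own - cost) * demand

def best_response(p_other, grid):
    return max(grid, key=lambda p: profit(p, p_other))

def iterate(increment, start=(3.0, 3.0), rounds=100):
    grid = np.arange(0.0, 12.0 + increment, increment)   # admissible discrete prices
    state, seen = start, [start]
    for _ in range(rounds):
        p1 = best_response(state[1], grid)
        p2 = best_response(p1, grid)
        new_state = (p1, p2)
        if new_state == state:       # fixed point of the best-response map
            return new_state, True
        if new_state in seen:        # revisited an earlier state: a price cycle
            return new_state, False
        seen.append(new_state)
        state = new_state
    return state, False

for inc in (0.01, 0.5, 2.0):
    (p1, p2), settled = iterate(inc)
    print(f"increment {inc}: prices ({p1:.2f}, {p2:.2f}), settled: {settled}")
```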
In this analysis we determine the factors driving the cross-sectional variation in uninsured deposits during the interest rate hiking cycle of 2022 to 2023. The goal of our analysis is to determine whether banks proactively managed deposit run risk prior to the hiking cycle which produced the 2023 Regional Banking Crisis. We find evidence that interest rate forward, futures, and swap use affected the change in a bank's uninsured deposits over the period. Interest rate option use, however, has no effect on the change in uninsured deposits. Similarly, bank equity levels were uncorrelated with uninsured deposit changes. We conclude that we find no evidence of banks managing run risk via their balance sheets prior to the 2023 Regional Banking Crisis.
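For readers who want the shape of such a cross-sectional test, the sketch below regresses the change in uninsured deposits on derivative-use indicators and an equity ratio using synthetic placeholder data; the variable definitions, sample, and data-generating process are assumptions and carry no empirical content.

```python
# Illustrative sketch only: a cross-sectional OLS regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_banks = 400
df = pd.DataFrame({
    "uses_swaps_futures_forwards": rng.integers(0, 2, n_banks),
    "uses_rate_options": rng.integers(0, 2, n_banks),
    "equity_ratio": rng.normal(0.10, 0.02, n_banks),
})
# Synthetic outcome: change in uninsured deposits (placeholder data-generating process).
df["d_uninsured_deposits"] = (0.03 * df["uses_swaps_futures_forwards"]
                              + rng.normal(0, 0.05, n_banks))

X = sm.add_constant(df[["uses_swaps_futures_forwards", "uses_rate_options", "equity_ratio"]])
model = sm.OLS(df["d_uninsured_deposits"], X).fit(cov_type="HC1")  # robust standard errors
print(model.summary())
```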
Enterprise Resource Planning (ERP) systems serve as the digital backbone of modern financial institutions, yet they continue to rely on static, rule-based workflows that limit adaptability, scalability, and intelligence. As business operations grow more complex and data-rich, conventional ERP platforms struggle to integrate structured and unstructured data in real time and to accommodate dynamic, cross-functional workflows. In this paper, we present the first AI-native, agent-based framework for ERP systems, introducing a novel architecture of Generative Business Process AI Agents (GBPAs) that bring autonomy, reasoning, and dynamic optimization to enterprise workflows. The proposed system integrates generative AI with business process modeling and multi-agent orchestration, enabling end-to-end automation of complex tasks such as budget planning, financial reporting, and wire transfer processing. Unlike traditional workflow engines, GBPAs interpret user intent, synthesize workflows in real time, and coordinate specialized sub-agents for modular task execution. We validate the framework through case studies in bank wire transfers and employee reimbursements, two representative financial workflows with distinct complexity and data modalities. Results show that GBPAs achieve up to 40% reduction in processing time, 94% drop in error rate, and improved regulatory compliance by enabling parallelism, risk control insertion, and semantic reasoning. These findings highlight the potential of GBPAs to bridge the gap between generative AI capabilities and enterprise-grade automation, laying the groundwork for the next generation of intelligent ERP systems.
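A minimal orchestration sketch, assuming a stubbed planner in place of a generative model: a user intent is turned into a workflow of steps, each dispatched to a specialised sub-agent; the agent names and the wire-transfer example are illustrative, and no particular LLM API is implied.

```python
# Illustrative sketch only: intent -> planned workflow -> sub-agent dispatch.
from typing import Callable

def plan_workflow(intent: str) -> list[str]:
    """Stub planner: in a GBPA this step would be synthesised by a generative model."""
    if "wire transfer" in intent:
        return ["validate_request", "check_compliance", "execute_transfer", "report"]
    return ["triage"]

SUB_AGENTS: dict[str, Callable[[dict], dict]] = {
    "validate_request": lambda ctx: {**ctx, "validated": True},
    "check_compliance": lambda ctx: {**ctx, "compliant": ctx["amount"] < 10_000},
    "execute_transfer": lambda ctx: {**ctx, "executed": ctx.get("compliant", False)},
    "report":           lambda ctx: {**ctx, "reported": True},
    "triage":           lambda ctx: {**ctx, "routed_to_human": True},
}

def run(intent: str, context: dict) -> dict:
    for step in plan_workflow(intent):
        context = SUB_AGENTS[step](context)   # each sub-agent handles one modular task
    return context

print(run("process a wire transfer", {"amount": 7_500}))
```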
We report the results of a study to identify and quantify drivers of inventory record inaccuracy (IRI) in a grocery retailing environment, a context where products are often subject to promotion activity and a substantial share of items are perishable. The analysis covers ~24,000 stock keeping units (SKUs) sold in 11 stores. We find that IRI is positively associated with average inventory level, restocking frequency, and whether the item is perishable, and negatively associated with promotional activity. We also conduct a field quasi-experiment to assess the marginal effect of stockcounts on sales. While performing an inventory audit is found to lead to an 11% store-wide sales lift, the audit has heterogeneous effects, with all the sales lift concentrated on items exhibiting negative IRI (i.e., where system inventory is greater than actual inventory). The benefits of inventory audits are also found to be more pronounced for perishable items, which are associated with higher IRI levels. Our findings inform retailers on the appropriate allocation of effort to improve IRI and reframe stock counting as a sales-increasing strategy rather than a cost-intensive necessity.
We present a possible approach to measuring inequality in a system of coupled Fokker-Planck-type equations that describe the evolution of distribution densities for two populations interacting pairwise due to social and/or economic factors. The macroscopic dynamics of their mean values follow a Lotka-Volterra system of ordinary differential equations. Unlike classical models of wealth and opinion formation, which tend to converge toward a steady-state profile, the oscillatory behavior of these densities only leads to the formation of local equilibria within the Fokker-Planck system. This makes tracking the evolution of most inequality measures challenging. However, an insightful perspective on the problem is obtained by using the coefficient of variation, a simple inequality measure closely linked to the Gini index. Numerical experiments confirm that, despite the system's oscillatory nature, inequality initially tends to decrease.
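To fix ideas, the sketch below computes the coefficient of variation and the empirical Gini index for a sampled distribution; the lognormal sample is a placeholder and is unrelated to the coupled Fokker-Planck dynamics studied above.

```python
# Illustrative sketch only: coefficient of variation and empirical Gini index
# for a placeholder wealth sample.
import numpy as np

rng = np.random.default_rng(0)
w = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)   # placeholder wealth sample

cv = w.std() / w.mean()                                # coefficient of variation

def gini(x):
    x = np.sort(x)
    n = x.size
    # Standard estimator on sorted data: G = 2*sum(i * x_(i)) / (n * sum(x)) - (n + 1)/n
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

print(f"coefficient of variation: {cv:.3f}, Gini index: {gini(w):.3f}")
```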