We study welfare analysis for policy changes when supply behavior is only partially known. We augment the robust-demand approach of Kang and Vasserman (2025) with two supply primitives--intervals of feasible pass-through and conduct (market-power) parameters--applied to two equilibrium snapshots. A simple accounting identity reduces the supply-side contribution to welfare to a single integral expression. From there, we deduce that the bounds are attained by a single-threshold "bang-bang" inverse pass-through function. This, combined with a modification of Kang and Vasserman's (2025) demand-side characterization, delivers simple bounds for consumer surplus, producer surplus, tax revenue, total surplus, and deadweight loss. We also study an ad valorem extension.
This paper provides an overview of the background and challenges of mathematics education in Syrian schools. We study the effect of popular mathematical puzzles on the mathematical thinking of schoolchildren through a paired experimental study (pre-test and post-test control group design), using a sample of sixth-grade primary school students at the Lady Mary School in Syria. We evaluate the impact of popular mathematical puzzles on students' problem-solving ability and mathematical skills; the skills were then measured and the results analyzed using a t-test as the tool for statistical analysis.
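The paired design described above compares each student's pre-test and post-test scores via a t-statistic on the within-student differences. A minimal sketch of that computation, using hypothetical scores (not the study's data):

```python
import math
import statistics

def paired_t_statistic(pre, post):
    """Paired t-statistic for pre-test/post-test scores:
    t = mean(d) / (sd(d) / sqrt(n)), where d are within-subject differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation (n - 1 denominator)
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical scores for five students (illustrative only):
pre = [10, 12, 9, 14, 11]
post = [12, 15, 10, 18, 13]
t = paired_t_statistic(pre, post)  # compare against a t-distribution with n - 1 df
```

The statistic is then compared against the t-distribution with n − 1 degrees of freedom to judge significance.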
Blockchains face inherent limitations when communicating outside their own ecosystem, largely due to the Byzantine Fault Tolerant (BFT) 3f+1 security model. Trusted Execution Environments (TEEs) are a promising mitigation because they allow a single trusted broker to interface securely with external systems. This paper develops a cost-of-collusion principal-agent model for compromising a TEE in a Data Center Execution Assurance design. The model isolates the main drivers of attack profitability: a K-of-n coordination threshold, independent detection risk q, heterogeneous per-member sanctions F_i, and a short-window flow prize (omega) proportional to the value secured (beta times V). We derive closed-form deterrence thresholds and a conservative design bound (V_safe) that make collusion unprofitable under transparent parameter choices. Calibrations based on time-advantaged arbitrage indicate that plausible TEE parameters can protect on the order of one trillion dollars in value. The analysis informs the design of TEE-BFT, a blockchain architecture that combines BFT consensus with near-stateless TEEs, distributed key generation, and on-chain attestation to maintain security when interacting with external systems.
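A stylized sketch of the deterrence logic, assembled from the ingredients the abstract lists (K-of-n threshold, detection risk q, sanction F, prize proportional to beta times V). The inequality below and all parameter values are our illustrative assumptions, not the paper's closed-form bound:

```python
def v_safe(K, q, F, beta):
    """Largest secured value V for which collusion stays unprofitable under the
    stylized condition beta * V / K <= q * F: each of the K colluders' expected
    share of the prize must not exceed their expected sanction.
    (Illustrative simplification, not the paper's exact V_safe.)"""
    return K * q * F / beta

# Hypothetical parameters: 5-member coordination threshold, 10% detection risk,
# $100M per-member sanction, prize equal to 0.0001 of the value secured.
V = v_safe(K=5, q=0.10, F=1e8, beta=1e-4)  # = 5e11 dollars
```

Under these made-up numbers the design could secure roughly 5e11 dollars; the paper's calibration, with its own parameters, reaches the order of one trillion.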
We study a model of the Fiscal Theory of the Price Level (FTPL) in a Bewley-Huggett-Aiyagari framework with heterogeneous agents. The model is set in continuous time, and ex post heterogeneity arises due to idiosyncratic, uninsurable income shocks. Such models have a natural interpretation as mean-field games, introduced by Huang, Caines, and Malhamé and by Lasry and Lions. We highlight this connection and discuss the existence and multiplicity of stationary equilibria in models with and without capital. Our focus is on the mathematical analysis, and we prove the existence of two equilibria in which the government runs constant primary deficits, which in turn implies the existence of multiple price levels.
We show that in the context of exchange economies defined by aggregate excess demand functions on the full open price simplex, the generic economy has a finite number of equilibria. Genericity is proved also for critical economies and, in both cases, in the strong sense that it holds for an open dense subset of economies in the Whitney topology. We use the concept of finite singularity type from singularity theory, which ensures that the equilibria of a map are finite in number. We then show that maps of finite singularity type make up an open and dense subset of all smooth maps and translate the result to the set of aggregate excess demand functions of an exchange economy. Along the way, we extend the classical results of Sonnenschein-Mantel-Debreu to aggregate excess demand functions defined on the full open price simplex, rather than just compact subsets of the simplex.
We study the problem of measuring the popularity of artists in music streaming platforms and the ensuing methods to compensate them (from the revenues platforms raise by charging users). We uncover the space of popularity indices upon exploring the implications of several axioms capturing principles with normative appeal. As a result, we characterize several families of indices. Some of them are intimately connected to the Shapley value, the central tool in cooperative game theory. Our characterizations might help to address the rising concern in the music industry to explore new methods that reward artists more appropriately. We actually connect our families to the new royalties models, recently launched by Spotify and Deezer.
A sender persuades a strategically naive decisionmaker (DM) by committing privately to an experiment. Sender's choice of experiment is unknown to the DM, who must form her posterior beliefs nonparametrically by applying some learning rule to an IID sample of (state, message) realizations. We show that, given mild regularity conditions, the empirical payoff functions hypo-converge to the full-information counterpart. This is sufficient to ensure that payoffs and optimal signals converge to the Bayesian benchmark. For finite sample sizes, the force of this "sampling friction" is nonmonotonic: it can induce more informative experiments than the Bayesian benchmark in settings like the classic Prosecutor-Judge game, and less revelation even in situations with perfectly aligned preferences. For many problems with state-independent preferences, we show that there is an optimal finite sample size for the DM. Although the DM would always prefer a larger sample for a fixed experiment, this result holds because the sample size affects sender's choice of experiment. Our results are robust to imperfectly informative feedback and the choice of learning rule.
Some well-known solutions for cooperative games with transferable utility (TU-games), such as the Banzhaf value, the Myerson value, and the Aumann-Dreze value, fail to satisfy efficiency, although they possess other desirable properties. This paper proposes a new approach to restore efficiency by extending any underlying solution to an efficient one, through what we call an efficient extension operator. We consider novel axioms for an efficient extension operator and characterize the egalitarian surplus sharing method and the proportional sharing method in a unified manner. These results can be considered as new justifications for the f-ESS values and the f-PS values introduced by Funaki and Koriyama (2025), which are generalizations of the equal surplus sharing value and the proportional sharing value. Our results offer an additional rationale for the values with an arbitrary underlying solution. As applications, we develop an efficient-fair extension of the solutions for the TU-games with communication networks and its variant for TU-games with coalition structures.
This paper covers a variety of mathematical folk puzzles, including geometric (Tangrams, dissection puzzles), logic, algebraic, probability (Monty Hall Problem, Birthday Paradox), and combinatorial challenges (Eight Queens Puzzle, Tower of Hanoi). It also explores modern modifications, such as digital and gamified approaches, to improve student involvement and comprehension. Furthermore, a novel concept, the "Minimal Dissection Path Problem for Polyominoes," is introduced and proven, demonstrating that the minimum number of straight-line cuts required to dissect a polyomino of $N$ squares into its constituent units is $N-1$. This problem, along with other puzzles, offers practical classroom applications that reinforce core mathematical concepts like area, spatial reasoning, and optimization, making learning both enjoyable and effective.
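The $N-1$ bound stated above admits a short counting argument (our paraphrase, not the paper's proof):

```latex
% Each straight-line cut is applied to a single piece and splits it into
% exactly two, so starting from 1 piece and ending with N unit squares
% requires at least N - 1 cuts:
\text{pieces after } k \text{ cuts} \;\le\; 1 + k
\quad\Longrightarrow\quad
N \;\le\; 1 + k
\quad\Longrightarrow\quad
k \;\ge\; N - 1.
% Cutting along the grid lines separating adjacent unit squares, one square
% at a time, achieves the bound, so the minimum is exactly N - 1.
```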
This paper analyzes the dynamic interaction between a fully rational, privately informed sender and a boundedly rational, uninformed receiver with memory constraints. The sender controls the flow of information, while the receiver designs a decision-making protocol, modeled as a finite-state machine, that governs how information is interpreted, how internal memory states evolve, and when and what decisions are made. The receiver must use the limited set of states optimally, both to learn and to create incentives for the sender to provide information. We show that behavior patterns such as information avoidance, opinion polarization, and indecision arise as equilibrium responses to asymmetric rationality. The model offers an expressive framework for strategic learning and decision-making in environments with cognitive and informational asymmetries, with applications to regulatory review and media distrust.
We study the effect of interim feedback policies in a dynamic all-pay auction where two players bid over two stages to win a common-value prize. We show that sequential equilibrium outcomes are characterized by Cheapest Signal Equilibria, wherein stage 1 bids are such that one player bids zero while the other chooses a cheapest bid consistent with some signal. Equilibrium payoffs for both players are always zero, and the sum of expected total bids equals the value of the prize. We conduct an experiment with four natural feedback policy treatments -- full, rank, and two cutoff policies -- and while the bidding behavior deviates from equilibrium, we fail to reject the hypothesis of no treatment effect on total bids. Further, stage 1 bids induce sunk costs and head starts, and we test for the resulting sunk cost and discouragement effects in stage 2 bidding.
The Synthetic Control method (SC) has become a valuable tool for estimating causal effects. Originally designed for single-treated unit scenarios, it has recently found applications in high-dimensional disaggregated settings with multiple treated units. However, challenges in practical implementation and computational efficiency arise in such scenarios. To tackle these challenges, we propose a novel approach that integrates the Multivariate Square-root Lasso method into the synthetic control framework. We rigorously establish the estimation error bounds for fitting the Synthetic Control weights using Multivariate Square-root Lasso, accommodating high-dimensionality and time series dependencies. Additionally, we quantify the estimation error for the Average Treatment Effect on the Treated (ATT). Through simulation studies, we demonstrate that our method offers superior computational efficiency without compromising estimation accuracy. We apply our method to assess the causal impact of COVID-19 Stay-at-Home Orders on the monthly unemployment rate in the United States at the county level.
In this paper, I develop a refinement of stability for matching markets with incomplete information. I introduce Information-Credible Pairwise Stability (ICPS), a solution concept in which deviating pairs can use credible, costly tests to reveal match-relevant information before deciding whether to block. By leveraging the option value of information, ICPS strictly refines Bayesian stability, rules out fear-driven matchings, and connects belief-based and information-based notions of stability. ICPS collapses to Bayesian stability when testing is uninformative or infeasible and coincides with complete-information stability when testing is perfect and free. I show that any ICPS-blocking deviation strictly increases total expected surplus, ensuring welfare improvement. I also prove that ICPS-stable allocations always exist, promote positive assortative matching, and are unique when the test power is sufficiently strong. The framework extends to settings with non-transferable utility, correlated types, and endogenous or sequential testing.
Our infrastructure systems enable our well-being by allowing us to move, store, and transform materials and information given considerable social and environmental variation. Critically, this ability is shaped by the degree to which society invests in infrastructure, a fundamentally political question in large public systems. There, infrastructure providers are distinguished from users through political processes, such as elections, and there is considerable heterogeneity among users. Previous political economic models have not taken into account (i) dynamic infrastructures, (ii) dynamic user preferences, and (iii) alternatives to rational actor theory. Meanwhile, engineering often neglects politics. We address these gaps with a general dynamic model of shared infrastructure systems that incorporates theories from political economy, social-ecological systems, and political psychology. We use the model to develop propositions on how multiple characteristics of the political process impact the robustness of shared infrastructure systems to capacity shocks and unequal opportunity for private infrastructure investment. Under user fees, inequality decreases robustness, but taxing private infrastructure use can increase robustness if non-elites have equal political influence. Election cycle periods have a nonlinear effect where increasing them increases robustness up to a point but decreases robustness beyond that point. Further, there is a negative relationship between the ideological sensitivity of candidates and robustness. Overall, the biases of voters and candidates (whether they favor tax increases or decreases) mediate these political-economic effects on robustness because biases may or may not match the reality of system needs (whether system recovery requires tax increases).
This paper develops a dual-channel framework for analyzing technology diffusion that integrates spatial decay mechanisms from continuous functional analysis with network contagion dynamics from spectral graph theory. Building on our previous studies, which establish Navier-Stokes-based approaches to spatial treatment effects and financial network fragility, we demonstrate that technology adoption spreads simultaneously through both geographic proximity and supply chain connections. Using comprehensive data on six technologies adopted by 500 firms over 2010-2023, we document three key findings. First, technology adoption exhibits strong exponential geographic decay with spatial decay rate $\kappa \approx 0.043$ per kilometer, implying a spatial boundary of $d^* \approx 69$ kilometers beyond which spillovers are negligible (R-squared = 0.99). Second, supply chain connections create technology-specific networks whose algebraic connectivity ($\lambda_2$) increases 300-380 percent as adoption spreads, with correlation between $\lambda_2$ and adoption exceeding 0.95 across all technologies. Third, traditional difference-in-differences methods that ignore spatial and network structure exhibit 61 percent bias in estimated treatment effects. An event study around COVID-19 reveals that network fragility increased 24.5 percent post-shock, amplifying treatment effects through supply chain spillovers in a manner analogous to financial contagion documented in our recent study. Our framework provides micro-foundations for technology policy: interventions have spatial reach of 69 kilometers and network amplification factor of 10.8, requiring coordinated geographic and supply chain targeting for optimal effectiveness.
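Under the exponential decay reported above, the spatial boundary $d^*$ follows from choosing a negligibility cutoff; a 5 percent cutoff reproduces the 69 km figure. The 5 percent level is our assumption for illustration, since the abstract does not state which cutoff defines "negligible":

```python
import math

def spatial_boundary(kappa, threshold=0.05):
    """Distance d* at which an exponential spillover exp(-kappa * d) falls to
    `threshold` of its at-source strength: d* = ln(1 / threshold) / kappa."""
    return math.log(1.0 / threshold) / kappa

d_star = spatial_boundary(kappa=0.043)  # ~69.7 km with a 5% cutoff
```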
Cooperative systems often remain in persistently suboptimal yet stable states. This paper explains such "rational stagnation" as an equilibrium sustained by a rational adversary whose utility follows the principle of potential loss, $u_{D} = U_{ideal} - U_{actual}$. Starting from the Prisoner's Dilemma, we show that the transformation $u_{i}' = a\,u_{i} + b\,u_{j}$ and the ratio of mutual recognition $w = b/a$ generate a fragile cooperation band $[w_{\min},\,w_{\max}]$ where both (C,C) and (D,D) are equilibria. Extending to a dynamic model with stochastic cooperative payoffs $R_{t}$ and intervention costs $(C_{c},\,C_{m})$, a Bellman-style analysis yields three strategic regimes: immediate destruction, rational stagnation, and intervention abandonment. The appendix further generalizes the utility to a reference-dependent nonlinear form and proves its stability under reference shifts, ensuring robustness of the framework. Applications to social-media algorithms and political trust illustrate how adversarial rationality can deliberately preserve fragility.
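A minimal sketch of the cooperation band, assuming standard Prisoner's Dilemma payoffs $T > R > P > S$ and the best-response conditions implied by the transformation $u_i' = a\,u_i + b\,u_j$ with $a > 0$. The derivation and the payoff values below are our illustrative reconstruction from the abstract, not the paper's calibration:

```python
def cooperation_band(T, R, P, S):
    """Range of w = b/a for which both (C,C) and (D,D) are Nash equilibria of
    the transformed game u_i' = a*u_i + b*u_j, given T > R > P > S and a > 0.
    (C,C) equilibrium: R + w*R >= T + w*S  =>  w >= (T - R) / (R - S) = w_min
    (D,D) equilibrium: P + w*P >= S + w*T  =>  w <= (P - S) / (T - P) = w_max"""
    w_min = (T - R) / (R - S)
    w_max = (P - S) / (T - P)
    return w_min, w_max  # band nonempty iff w_min <= w_max

# Illustrative payoffs (not from the paper): band [1/3, 1]
w_min, w_max = cooperation_band(T=4, R=3, P=2, S=0)
```

For the textbook payoffs $T=5, R=3, P=1, S=0$ the band is empty ($w_{\min} = 2/3 > w_{\max} = 1/4$), so whether the fragile band exists depends on the payoff configuration.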
Rejections of positive offers in the Ultimatum Game have been attributed to different motivations. We show that a model combining social preferences and moral concerns provides a unifying explanation for these rejections while accounting for additional evidence. Under the preferences considered, a positive degree of spite is a necessary and sufficient condition for rejecting positive offers. This indicates that social preferences, rather than moral concerns, drive rejection behavior. This does not imply that moral concerns do not matter. We show that rejection thresholds increase with individuals' moral concerns, suggesting that morality acts as an amplifier of social preferences. Using data from van Leeuwen and Alger (2024), we estimate individuals' social preferences and moral concerns using a finite mixture approach. Consistent with previous evidence, we identify two types of individuals who reject positive offers in the Ultimatum Game but differ in their Dictator Game behavior.
The aim of this paper is to formulate and study a stochastic model for the management of environmental assets in a geographical context where in each place the local authorities take their policy decisions maximizing their own welfare, hence not cooperating with each other. A key feature of our model is that welfare depends not only on the local environmental asset but also on the global one, making the problem more interesting yet technically more complex to study, since strategic interaction among players arises. We study the problem first from the $N$-players game perspective and find open and closed loop Nash equilibria in explicit form. We also study the convergence of the $N$-players game (as $N\to +\infty$) to a suitable Mean Field Game whose unique equilibrium is exactly the limit of both the open and closed loop Nash equilibria found above, hence supporting their meaning for the game. We then solve explicitly the problem from the cooperative perspective of the social planner and compare its solution to the equilibria of the $N$-players game. Moreover, we find the Pigouvian tax which aligns the decentralized closed loop equilibrium to the social optimum.