An employer contracts with a worker to incentivize effort whose productivity depends on ability; the worker then enters a market that pays him contingent on its evaluation of his ability. With a non-additive monitoring technology, the interdependence between market expectations and worker effort can lead to multiple equilibria (in contrast to Holmstrom (1982/1999) and Gibbons and Murphy (1992)). We identify a necessary and sufficient criterion for the employer to face such strategic uncertainty, one linked to skill-effort complementarity, a pervasive feature of labor markets. To fully implement work, the employer optimally engages in private wage discrimination that iteratively eliminates pessimistic market expectations and low worker effort. Our results suggest that, under contractual privacy, employers' coordination motives generate within-group pay inequality. The comparative statics further explain several stylized facts about residual wage dispersion.
An agent is a misspecified Bayesian if she updates her belief using Bayes' rule given a subjective, possibly misspecified model of her signals. This paper shows that a belief sequence is consistent with misspecified Bayesianism if the prior contains a grain of the average posterior, i.e., is a mixture of the average posterior and another distribution. A partition-based variant of the grain condition is both necessary and sufficient. Under correct specification, the grain condition reduces to the usual Bayes plausibility. The condition imposes no restriction on the posterior given a full-support prior over a finite or compact state space. However, it rules out posteriors that have heavier tails than the prior on unbounded state spaces. The results cast doubt on the feasibility of testing Bayesian updating in many environments. They also suggest that many seemingly non-Bayesian updating rules are observationally equivalent to Bayesian updating under misspecified beliefs.
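As a compact illustration of the mixture condition described above (with notation introduced here for exposition, which may differ from the paper's), write $\mu_0$ for the prior, $\mu_1(\cdot \mid s)$ for the posterior after signal $s$, and $\bar{\mu} = \int \mu_1(\cdot \mid s)\, dP(s)$ for the average posterior under the signal distribution $P$. The prior contains a grain of the average posterior if
$$ \mu_0 \;=\; \varepsilon\,\bar{\mu} + (1-\varepsilon)\,\nu \qquad \text{for some } \varepsilon \in (0,1] \text{ and probability measure } \nu. $$
The special case $\varepsilon = 1$ gives $\mu_0 = \bar{\mu}$, the usual Bayes-plausibility condition mentioned above.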
This paper proposes normative criteria for voting rules under uncertainty about individual preferences. The criteria emphasize the importance of responsiveness, i.e., the probability that the social outcome coincides with an individual's realized preference. Given a convex set of probability distributions of preferences, denoted by $P$, a voting rule is said to be $P$-robust if, for each probability distribution in $P$, at least one individual's responsiveness exceeds one-half. Our main result establishes that a voting rule is $P$-robust if and only if there exists a nonnegative weight vector such that the weighted average of individual responsiveness is strictly greater than one-half under every extreme point of $P$. In particular, if the set $P$ includes all degenerate distributions, a $P$-robust rule is a weighted majority rule without ties.
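In notation suggested by the abstract (chosen here for exposition), let $r_i(f, p)$ denote individual $i$'s responsiveness under voting rule $f$ when preferences are drawn from $p$. Then $P$-robustness requires $\max_i r_i(f, p) > 1/2$ for every $p \in P$, and the main characterization can be sketched as
$$ f \text{ is } P\text{-robust} \;\Longleftrightarrow\; \exists\, w \ge 0,\ \textstyle\sum_i w_i = 1:\ \sum_i w_i\, r_i(f, p) > \tfrac{1}{2} \ \text{ for every extreme point } p \text{ of } P. $$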
I develop a theoretical model to examine how the rise of autonomous AI (artificial intelligence) agents disrupts two-sided digital advertising markets. Through this framework, I demonstrate that users' rational, private decisions to delegate browsing to agents create a negative externality, precipitating declines in ad prices, publisher revenues, and overall market efficiency. The model identifies the conditions under which publisher interventions such as blocking AI agents or imposing tolls may mitigate these effects, although they risk fragmenting access and value. I formalize the resulting inefficiency as an "attention lemons" problem, where synthetic agent traffic dilutes the quality of attention sold to advertisers, generating adverse selection. To address this, I propose a Pigouvian correction mechanism: a per-delegation fee designed to internalize the externality and restore welfare. The model demonstrates that, for an individual publisher, charging AI agents toll fees for access strictly dominates both the 'Blocking' and 'Null (inaction)' strategies. Finally, I characterize a critical tipping point beyond which unchecked delegation triggers a collapse of the ad-funded ecosystem.
For centuries, financial institutions have responded to liquidity challenges by forming closed, centralized clearing clubs with strict rules and membership, which allow them to collaborate on using the least money to discharge the most debt. Because these clubs are closed, much of the general public has been excluded from participation. Yet the vast majority of private-sector actors are micro or small firms that are vulnerable to late payments and generally ineligible for bank loans. This low-liquidity environment often results in gridlock, leads to insolvency, and disproportionately impacts small enterprises and communities. On the other hand, blockchain communities have developed open, decentralized settlement systems, along with a proliferation of store-of-value assets and new lending protocols, allowing anyone to permissionlessly transact and access credit. However, these protocols are still used primarily for speculative purposes, and so far have fallen short of the large-scale positive impact on the real economy prophesied by their promoters. We address these challenges by introducing Cycles, an open, decentralized clearing, settlement, and issuance protocol. Cycles is designed to enable firms to overcome payment inefficiencies, to reduce their working-capital costs, and to leverage diverse assets and liquidity sources, including cryptocurrencies, stablecoins, and lending protocols, in service of clearing more debt with less money. Cycles solves real-world liquidity challenges through a privacy-preserving multilateral settlement platform based on a graph optimization algorithm. The design rests on a core insight: liquidity resides within cycles in the payment network's structure and can be accessed via settlement flows optimized to reduce debt.
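To make the core insight concrete, here is a minimal, hypothetical sketch (not the Cycles protocol itself) of cycle-based obligation clearing on a payment graph: the debts along any directed cycle can be netted down by the smallest obligation on that cycle, discharging debt without any money changing hands. The library choice (networkx) and the greedy cycle-by-cycle strategy are illustrative assumptions.

```python
# Illustrative sketch of multilateral debt clearing via cycles in a payment graph.
# This is NOT the Cycles protocol; it only demonstrates that obligations lying on
# a directed cycle can be netted by the cycle's minimum edge weight.
import networkx as nx

def clear_cycles(obligations):
    """obligations: dict mapping (debtor, creditor) -> amount owed."""
    G = nx.DiGraph()
    for (debtor, creditor), amount in obligations.items():
        if amount > 0:
            G.add_edge(debtor, creditor, weight=amount)

    cleared = 0.0
    while True:
        try:
            cycle = next(nx.simple_cycles(G))          # any directed cycle of debts
        except StopIteration:
            break
        edges = list(zip(cycle, cycle[1:] + cycle[:1]))
        m = min(G[u][v]["weight"] for u, v in edges)   # smallest debt on the cycle
        for u, v in edges:
            G[u][v]["weight"] -= m
            if G[u][v]["weight"] == 0:
                G.remove_edge(u, v)
        cleared += m * len(edges)                      # face value discharged without cash
    remaining = {(u, v): d["weight"] for u, v, d in G.edges(data=True)}
    return cleared, remaining

# Example: A owes B 10, B owes C 15, C owes A 12 -- netting the cycle
# discharges 30 units of gross debt with no cash, leaving two residual obligations.
print(clear_cycles({("A", "B"): 10, ("B", "C"): 15, ("C", "A"): 12}))
```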
We examine functions representing the cumulative probability of a binomial random variable exceeding a threshold, expressed in terms of the success probability per trial. These functions are known to exhibit a unique inflection point. We generalize this property to their compositions and highlight its applications.
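Concretely, with notation chosen here for exposition, the functions in question are
$$ F_{n,k}(p) \;=\; \Pr(X \ge k) \;=\; \sum_{j=k}^{n} \binom{n}{j} p^{j} (1-p)^{n-j}, \qquad X \sim \mathrm{Binomial}(n, p),\ p \in [0,1]. $$
A standard identity gives $F_{n,k}'(p) = k \binom{n}{k} p^{k-1} (1-p)^{n-k}$; in the non-degenerate case $1 < k < n$ this derivative is single-peaked with maximum at $p = (k-1)/(n-1)$, so $F_{n,k}$ switches from convex to concave exactly once, which is the unique-inflection-point property referred to above.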
I study the optimal voting mechanism for a committee that must decide whether to enact or block a policy of unknown benefit. Information can come both from committee members who can acquire it at cost, and a strategic lobbyist who wishes the policy to be enacted. I show that the dictatorship of the most-demanding member is a dominant voting mechanism: any other voting mechanism is (i) less likely to enact a good policy, (ii) more likely to enact a bad policy, and (iii) burdens every member with a greater cost of acquiring information.
There exists a preference relation on infinite utility streams that does not discriminate between different periods, satisfies the Pareto criterion, and under which almost all pairs of utility streams are strictly comparable. Such a preference relation provides a counterexample to a claim in [Zame, William R. ``Can intergenerational equity be operationalized?'' Theoretical Economics 2.2 (2007): 187-202.]
We propose a general methodology for recovering preference parameters from data on choices and response times. Our methods yield estimates with fast ($1/n$ for $n$ data points) convergence rates when specialized to the popular Drift Diffusion Model (DDM), but are broadly applicable to generalizations of the DDM as well as to alternative models of decision making that make use of response time data. The paper develops an empirical application to an experiment on intertemporal choice, showing that the use of response times improves predictive accuracy and matters for the estimation of economically relevant parameters.
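For readers unfamiliar with the DDM referenced above, the following is a minimal simulator of the standard model under assumed parameters: latent evidence drifts at rate $\mu$ until it hits one of two boundaries at $\pm a$; the boundary hit determines the choice and the hitting time is the response time. It sketches only the data-generating process, not the authors' estimator.

```python
# Minimal sketch of the standard Drift Diffusion Model (DDM): parameter names
# and the Euler discretization are illustrative assumptions.
import numpy as np

def simulate_ddm(mu, a, n_trials, dt=1e-3, noise=1.0, rng=None):
    rng = np.random.default_rng(rng)
    choices, rts = np.empty(n_trials, dtype=int), np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < a:
            x += mu * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = 1 if x >= a else 0   # 1 = upper boundary hit, 0 = lower
        rts[i] = t                        # response time = first hitting time
    return choices, rts

choices, rts = simulate_ddm(mu=0.5, a=1.0, n_trials=200, rng=0)
print(choices.mean(), rts.mean())         # choice frequency and mean response time
```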
1.1 Background: Parks and greened schoolyards are examples of urban green spaces praised for their environmental, social, and economic benefits in cities around the world. A growing number of studies find that proximity to green space is associated with higher property values, but there is still disagreement about how strong and consistent these effects are across cities (Browning et al., 2023; Grunewald et al., 2024; Teo et al., 2023).
1.2 Purpose: This systematic review is the first to synthesize the geographical and statistical evidence linking greened schoolyards specifically, rather than green space in general, to higher property prices. By focusing on schoolyard-specific interventions, we identify spatial, economic, and social effects that are often missed in broader studies of green space.
1.3 Methods: Following the PRISMA guidelines, we conducted a systematic search and review of papers published in established urban studies, environmental, and real estate journals. The inclusion criteria emphasized quantitative analyses of the relationship between urban green space and home values using hedonic pricing or spatial econometric models. Fifteen studies from North America, Europe, and Asia met the inclusion criteria (Anthamatten et al., 2022; Wen et al., 2019; Li et al., 2019; Mansur & Yusuf, 2022).
This study develops a conceptual simulation model for a tokenized recycling incentive system that integrates blockchain infrastructure, market-driven pricing, behavioral economics, and carbon credit mechanisms. The model aims to address the limitations of traditional recycling systems, which often rely on static government subsidies and fail to generate sustained public participation. By introducing dynamic token values linked to real-world supply and demand conditions, as well as incorporating non-monetary behavioral drivers (e.g., social norms, reputational incentives), the framework creates a dual-incentive structure that can adapt over time. The model uses Monte Carlo simulations to estimate outcomes under a range of scenarios involving operational costs, carbon pricing, token volatility, and behavioral adoption rates. Due to the absence of real-world implementations of such integrated blockchain-based recycling systems, the paper remains theoretical and simulation-based. It is intended as a prototype framework for future policy experimentation and pilot projects. The model provides insights for policymakers, urban planners, and technology developers aiming to explore decentralized and market-responsive solutions to sustainable waste management. Future work should focus on validating the model through field trials or behavioral experiments.
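As a purely illustrative sketch of the kind of Monte Carlo scenario analysis described above, the snippet below draws operational costs, carbon prices, token values, and adoption rates from assumed distributions and tallies a simple net benefit. Every distribution, parameter name, and the accounting identity is an assumption made for illustration, not a quantity from the paper.

```python
# Purely illustrative Monte Carlo sketch of the scenario analysis described above.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                                     # number of simulated scenarios

material_value = 0.10                                          # assumed resale value of recyclate (USD/kg)
token_payout = 0.05 * rng.lognormal(0.0, 0.4, size=N)          # USD value of tokens paid per kg (volatile)
carbon_credit = rng.uniform(20, 120, size=N) / 1000            # carbon revenue per kg recycled (USD/kg)
op_cost = rng.normal(0.06, 0.015, size=N)                      # collection and processing cost (USD/kg)
adoption = rng.beta(2, 5, size=N)                              # behavioral adoption rate of households
volume = adoption * 500_000                                    # kg recycled per period

net_benefit = volume * (material_value + carbon_credit - token_payout - op_cost)
print("P(net benefit > 0):", (net_benefit > 0).mean())
print("5th-95th percentile of net benefit:", np.percentile(net_benefit, [5, 95]))
```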
We model a competitive market where AI agents buy answers from upstream generative models and resell them to users who differ in how much they value accuracy and in how much they fear hallucinations. Agents can privately exert costly verification effort to lower hallucination risk. Since interactions halt in the event of a hallucination, the threat of losing future rents disciplines effort. A unique reputational equilibrium exists under nontrivial discounting. The equilibrium effort, and thus the price, increases with the share of users who have high accuracy concerns, implying that hallucination-sensitive sectors, such as law and medicine, endogenously induce greater verification effort in agentic AI markets.
We examine receiver-optimal mechanisms for aggregating information that is divided across many biased senders. Each sender privately observes an unconditionally independent signal about an unknown state, so no sender can verify another's report. The receiver makes a binary accept/reject decision that, together with the state, determines players' payoffs. When information is divided across a small population and bias is low, the receiver-optimal mechanism coincides with the sender-preferred allocation, and can be implemented by letting senders \emph{confer} privately before reporting. For larger populations, however, the receiver can benefit from the informational divide. We introduce a novel \emph{incentive-compatibility-in-the-large (ICL)} approach to solve the high-dimensional mechanism design problem in the large-population limit. We use it to show that optimal mechanisms converge to one that depends only on the accept payoff and punishes excessive consensus in the direction of the common bias. These surplus-burning punishments imply that payoffs are bounded away from the first-best level.
We examine a green transition policy involving a tax on brown goods in an economy where preferences for green consumption consist of a constant intrinsic individual component and an evolving social component. We analyse equilibrium dynamics when social preferences exert a positive externality in green consumption, creating complementarity between policy and preferences. The results show that accounting for this externality allows for a lower tax rate than a policy that ignores social-norm effects. Furthermore, the stability conditions permit gradual tax reductions, or even removal of the tax, along the transition path, minimising welfare losses. Thus, incorporating policy-preference interactions improves the design of green transition policy.
In digital advertising, online platforms allocate ad impressions through real-time auctions, where advertisers typically rely on autobidding agents to optimize bids on their behalf. Unlike traditional auctions for physical goods, the value of an ad impression is uncertain and depends on the unknown click-through rate (CTR). While platforms can estimate CTRs more accurately using proprietary machine learning algorithms, these estimates and algorithms remain opaque to advertisers. This information asymmetry naturally raises the following question: how can platforms disclose information in a way that is both credible and revenue-optimal? We address this question through calibrated signaling, where each prior-free bidder receives a private signal that truthfully reflects the conditional expected CTR of the ad impression. Such signals are trustworthy and allow bidders to form unbiased value estimates, even without access to the platform's internal algorithms. We study the design of platform-optimal calibrated signaling in the context of a second-price auction. Our first main result fully characterizes the structure of the optimal calibrated signaling, which can also be computed efficiently. We show that this signaling can extract the full surplus -- or even exceed it -- depending on a specific market condition. Our second main result is an FPTAS for computing an approximately optimal calibrated signaling that satisfies an IR (individual rationality) condition. Our main technical contributions are: a reformulation of the platform's problem as a two-stage optimization problem that involves optimal transport subject to calibration feasibility constraints on the bidders' marginal bid distributions; and a novel correlation plan that constructs the optimal distribution over second-highest bids.
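For intuition, the snippet below illustrates what a calibrated CTR signal means in the sense used above: each reported signal value equals the conditional expected CTR given that signal. The binning construction is a generic calibration device for illustration only, not the paper's optimal (or approximately optimal) signaling scheme, and the data-generating assumptions are hypothetical.

```python
# Sketch of a "calibrated" CTR signal: E[CTR | signal] equals the signal itself.
import numpy as np

rng = np.random.default_rng(1)
true_ctr = rng.beta(2, 30, size=100_000)                                  # latent CTRs (assumed prior)
raw_score = np.clip(true_ctr + rng.normal(0, 0.01, true_ctr.size), 0, 1)  # platform's noisy estimate

# Coarsen the raw score into deciles, then report each bin's conditional mean CTR.
bins = np.quantile(raw_score, np.linspace(0, 1, 11))
bin_id = np.clip(np.searchsorted(bins, raw_score, side="right") - 1, 0, 9)
signal = np.array([true_ctr[bin_id == b].mean() for b in range(10)])[bin_id]

# Calibration check: within each signal value, the average realized CTR matches the signal.
for s in np.unique(signal):
    print(f"signal={s:.4f}  E[CTR|signal]={true_ctr[signal == s].mean():.4f}")
```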
This paper introduces a methodology for identifying and simulating financial and economic systems using stochastically structured reservoir computers (SSRCs). The proposed framework leverages structure-preserving embeddings and graph-informed coupling matrices to model inter-agent dynamics with enhanced interpretability. A constrained optimization scheme ensures that the learned models satisfy both stochastic and structural constraints. Two empirical case studies, one on a dynamic behavioral model of resource competition among agents and one on regional inflation network dynamics, illustrate the effectiveness of the approach in capturing and anticipating complex nonlinear patterns and in enabling interpretable predictive analysis under uncertainty.
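As a rough illustration of a reservoir with a graph-informed coupling matrix, the sketch below masks an echo-state-style reservoir by an assumed interaction graph and row-normalizes it to impose a stochastic structure, then trains a ridge readout. This is one plausible reading of the construction; the paper's SSRC identification with structure-preserving embeddings and constrained optimization is more involved, and all hyperparameters here are assumptions.

```python
# Echo-state-style sketch of a graph-masked, row-stochastic reservoir (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_agents, T = 20, 500

adjacency = (rng.random((n_agents, n_agents)) < 0.2).astype(float)   # assumed interaction graph
W = rng.normal(size=(n_agents, n_agents)) * adjacency                 # graph-masked coupling
W = np.abs(W) / np.abs(W).sum(axis=1, keepdims=True).clip(min=1e-12)  # row-stochastic structure
W *= 0.9                                                              # scale for a stable (echo-state) regime

u = rng.normal(size=(T, 1))                                           # exogenous driver (e.g., a shock series)
W_in = rng.normal(size=(n_agents, 1)) * 0.5
states = np.zeros((T, n_agents))
x = np.zeros(n_agents)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])                                  # reservoir update
    states[t] = x

# Linear readout trained by ridge regression to one-step-ahead targets.
y = u[1:, 0]                                                          # illustrative target
X = states[:-1]
ridge = 1e-2
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_agents), X.T @ y)
print("in-sample RMSE:", np.sqrt(np.mean((X @ w_out - y) ** 2)))
```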
Regimes routinely conceal acts of repression. We show that observed repression may be negatively correlated with total repression -- which includes both revealed and concealed acts -- across time and space. This distortion implies that policy interventions aimed at reducing repression by incentivizing regimes can produce perverse effects. It also poses challenges for research evaluating the efficacy of repression -- its deterrent and backlash effects. To address this, we develop a model in which regimes choose both whether to repress and whether to conceal repression. We leverage equilibrium relationships to propose a method for recovering concealed repression using observable data. We then provide an informational theory of deterrence and backlash effects, identifying the conditions under which each arises and intensifies. Finally, we show that comparing protest probabilities in the presence and absence of repression provides an upper bound on the size of the backlash effect, overstating its magnitude and thereby underestimating the efficacy of repression.
In statistical modeling, prediction and explanation are two fundamental objectives. When the primary goal is forecasting, it is important to account for the inherent uncertainty in estimating unknown outcomes. Traditionally, confidence intervals constructed from standard deviations have served as the formal means of quantifying this uncertainty and evaluating how close predicted values are to their true counterparts. This approach reflects an implicit aim to capture the behavioral similarity between observed and estimated values. However, advances in similarity-based approaches offer promising alternatives to conventional variance-based techniques, particularly in contexts with large datasets or many explanatory variables. This study investigates which methods, traditional or similarity-based, produce narrower confidence intervals under comparable conditions, and therefore more precise and informative intervals. The dataset consists of 42 U.S. mega-cap companies. Because the large number of features induces interdependencies among predictors, Ridge regression is applied to address multicollinearity. The findings indicate that the variance-based method and longest common subsequence (LCSS) exhibit the highest coverage among the analyzed methods, although they produce broader intervals. Conversely, dynamic time warping (DTW), Hausdorff distance, and time warp edit distance (TWED) deliver narrower intervals, positioning them as the most precise methods, despite their moderate coverage rates. Ultimately, the trade-off between interval width and coverage underscores the need for context-aware decision making when selecting similarity-based methods for confidence interval estimation in time series analysis.
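The snippet below contrasts a conventional variance-based interval with one whose width is scaled by a similarity measure, in the spirit of the comparison described above. Because the abstract does not specify how a similarity score is converted into an interval width, the DTW-based recipe shown (half-width equal to the DTW distance between fitted and observed series on a calibration window, divided by its length) is purely an illustrative assumption, as are the synthetic data.

```python
# Variance-based vs. similarity-based (DTW-scaled) interval half-widths: an assumed recipe.
import numpy as np
from sklearn.linear_model import Ridge

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 25))                        # many (potentially correlated) predictors
y = X[:, :5].sum(axis=1) + rng.normal(0, 1.0, 300)    # synthetic target
X_train, X_cal, y_train, y_cal = X[:200], X[200:], y[:200], y[200:]

model = Ridge(alpha=1.0).fit(X_train, y_train)        # Ridge to handle predictor interdependence
pred_cal = model.predict(X_cal)
resid = y_cal - pred_cal

hw_var = 1.96 * resid.std()                           # variance-based 95% half-width
hw_dtw = dtw_distance(pred_cal, y_cal) / len(y_cal)   # DTW-scaled half-width (assumed recipe)
for name, hw in [("variance-based", hw_var), ("DTW-based", hw_dtw)]:
    cover = np.mean(np.abs(resid) <= hw)
    print(f"{name:15s} half-width={hw:.3f}  calibration coverage={cover:.2%}")
```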