No-trade theorems examine conditions under which agents cannot agree to disagree on the value of a security that pays according to some state of nature, thus preventing any mutual agreement to trade. A large literature has examined conditions that imply no trade, including relaxations of the common prior and common knowledge assumptions, as well as settings with boundedly rational or ambiguity-averse agents. We contribute to this literature by examining conditions on agents' private information that reveal, or verify, the true value of the security. We argue that these conditions offer insights in three settings: insider trading, the connection between low market liquidity and no trade, and trading via public blockchains and oracles.
Mechanism design with inspection has received increasing attention due to its practical applications. For example, large warehouses have started to auction scarce capacity, which should be allocated so as to maximize the seller's revenue. In such mechanism design problems, the seller can costlessly inspect the true value of a buyer, namely his realized sales, in the next period. Prior work on mechanism design with deferred inspection assumes a common prior distribution. We design a mechanism with deferred inspection that is (distributionally) robustly optimal, whether the ambiguity-averse mechanism designer wants to maximize her worst-case expected payoff or to minimize her worst-case expected regret. It is a relatively simple mechanism with a concave allocation rule and a linear payment rule. We also propose another robustly optimal mechanism with the same concave allocation function that extracts the maximal payment from all types of the agent; under non-worst-case distributions, it can achieve a strictly higher expected payoff than the robustly optimal mechanism with the linear payment rule. Finally, we show that monotone multi-bidder mechanisms might not exist.
The rise of Large Language Models (LLMs) has driven progress in reasoning tasks -- from program synthesis to scientific hypothesis generation -- yet their ability to handle ranked preferences and structured algorithms in combinatorial domains remains underexplored. We study matching markets, a core framework behind applications like resource allocation and ride-sharing, which require reconciling individual ranked preferences to ensure stable outcomes. We evaluate several state-of-the-art models on a hierarchy of preference-based reasoning tasks -- ranging from stable-matching generation to instability detection, instability resolution, and fine-grained preference queries -- to systematically expose their logical and algorithmic limitations in handling ranked inputs. Surprisingly, even top-performing models with advanced reasoning struggle to resolve instability in large markets, often failing to identify blocking pairs or execute algorithms iteratively. We further show that parameter-efficient fine-tuning (LoRA) significantly improves performance in small markets, but fails to bring about a similar improvement on large instances, suggesting the need for more sophisticated strategies to improve LLMs' reasoning with larger-context inputs.
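To make the tasks concrete, the sketch below shows two standard routines the benchmark's tasks revolve around (our own illustration, not the paper's code): Gale-Shapley deferred acceptance for generating a stable matching, and a brute-force blocking-pair check for detecting instability. It assumes complete preference lists on both sides and equally many agents per side.

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Gale-Shapley with men proposing; returns a stable matching {man: woman}."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                    # men without a partner
    next_idx = {m: 0 for m in men_prefs}      # next woman to propose to
    engaged = {}                              # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers m to her partner
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m
    return {m: w for w, m in engaged.items()}

def blocking_pairs(matching, men_prefs, women_prefs):
    """All pairs (m, w) who each prefer the other to their assigned partner."""
    partner = {w: m for m, w in matching.items()}
    pairs = []
    for m, prefs in men_prefs.items():
        for w in prefs:
            if w == matching[m]:
                break                           # m prefers his match from here on
            if women_prefs[w].index(m) < women_prefs[w].index(partner[w]):
                pairs.append((m, w))
    return pairs

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m2", "m1"]}
mu = deferred_acceptance(men, women)
print(mu, blocking_pairs(mu, men, women))       # stable: no blocking pairs
```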
We develop a simple yet efficient Lagrangian method for computing equilibrium prices in a mean-field game price-formation model. We prove that equilibrium prices are optimal in terms of a suitable criterion and derive a primal-dual gradient-based algorithm for computing them. One of the highlights of our computational framework is the efficient, simple, and flexible implementation of the algorithm using modern automatic differentiation techniques. Our implementation is modular and admits a seamless extension to high-dimensional settings with more complex dynamics, costs, and equilibrium conditions. Additionally, automatic differentiation enables a versatile algorithm that requires only coding the cost functions of agents. It automatically handles the gradients of the costs, thereby eliminating the need to manually form the adjoint equations.
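As a toy illustration of the primal-dual idea (our own sketch, not the paper's implementation), consider a single scalar decision variable with a convex cost and a price acting as the Lagrange multiplier on market clearing. Gradients are hand-coded here; in the paper's framework an automatic-differentiation library would derive them from the coded cost functions alone.

```python
# Gradient descent on the agent's variable x, gradient ascent on the price
# p, for the Lagrangian L(x, p) = cost(x) + p * excess_demand(x).
# An autodiff library (e.g. JAX or PyTorch) would supply these gradients
# automatically, which is the flexibility the abstract highlights.

def solve_price(cost_grad, excess_demand, excess_demand_grad,
                x0=0.0, p0=0.0, lr=0.05, iters=5000):
    x, p = x0, p0
    for _ in range(iters):
        x -= lr * (cost_grad(x) + p * excess_demand_grad(x))  # primal descent
        p += lr * excess_demand(x)                            # dual ascent
    return x, p

# Quadratic cost (x - 1)^2 against fixed supply 0.5: the market clears at
# x = 0.5 with equilibrium multiplier p = 1.0.
x_star, p_star = solve_price(
    cost_grad=lambda x: 2.0 * (x - 1.0),
    excess_demand=lambda x: x - 0.5,
    excess_demand_grad=lambda x: 1.0,
)
print(round(x_star, 3), round(p_star, 3))   # -> 0.5 1.0
```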
Kidney Exchange Programmes (KEPs) facilitate the exchange of kidneys, and larger pools of recipient-donor pairs tend to yield proportionally more transplants, leading to the proposal of international KEPs (IKEPs). However, as studied by \citet{mincu2021ip}, practical limitations must be considered in IKEPs to ensure that countries remain willing to participate. Thus, we study IKEPs with country-specific parameters, represented by a tuple $\Gamma$, restricting the selected transplants to be feasible for the countries to conduct, e.g., imposing an upper limit on the number of consecutive exchanges within a country's borders. We provide a complete complexity dichotomy for the problem of finding a feasible (according to the constraints given by $\Gamma$) cycle packing with the maximum number of transplants, for every possible $\Gamma$. We also study the potential for countries to misreport their parameters to increase their allocation. As manipulation can harm the total number of transplants, we propose a novel individually rational and incentive compatible mechanism $\mathcal{M}_{\text{order}}$. We first give a theoretical approximation ratio for $\mathcal{M}_{\text{order}}$ in terms of the number of transplants, and show that the approximation ratio of $\mathcal{M}_{\text{order}}$ is asymptotically optimal. We then use simulations which suggest that, in practice, the performance of $\mathcal{M}_{\text{order}}$ is significantly better than this worst-case ratio.
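For intuition, here is a brute-force sketch of the underlying optimization (our own illustration, ignoring the country-specific constraints in $\Gamma$; practical IKEP solvers use integer programming): enumerate simple cycles up to a length cap and search for a vertex-disjoint subset covering the most recipient-donor pairs.

```python
# Vertices are recipient-donor pairs; an arc u -> v means u's donor is
# compatible with v's recipient. Exponential time; vertices must be
# comparable (e.g. ints) so each cycle is found once from its smallest vertex.

def simple_cycles_up_to(adj, k):
    """All simple directed cycles with at most k vertices."""
    cycles = []
    def dfs(start, v, path):
        for w in adj.get(v, []):
            if w == start and len(path) >= 2:
                cycles.append(list(path))
            elif w > start and w not in path and len(path) < k:
                dfs(start, w, path + [w])
    for v in adj:
        dfs(v, v, [v])
    return cycles

def max_cycle_packing(adj, k):
    """Vertex-disjoint cycles covering the most vertices (i.e. transplants)."""
    cycles = simple_cycles_up_to(adj, k)
    best, best_size = [], 0
    def search(i, chosen, used, size):
        nonlocal best, best_size
        if size > best_size:
            best, best_size = list(chosen), size
        for j in range(i, len(cycles)):
            if used.isdisjoint(cycles[j]):
                search(j + 1, chosen + [cycles[j]],
                       used | set(cycles[j]), size + len(cycles[j]))
    search(0, [], set(), 0)
    return best

adj = {1: [2], 2: [1, 3], 3: [4], 4: [2]}
print(max_cycle_packing(adj, k=3))      # [[2, 3, 4]]: three transplants
```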
Implementation theory has made significant advances in characterizing which social choice functions can be implemented in Nash equilibrium, but these results typically assume sophisticated strategic reasoning by agents. However, there is evidence that agents frequently cannot perform such reasoning. In this paper, we present a finite mechanism which fully implements Maskin-monotonic social choice functions as the outcome of the unique correlated equilibrium of the induced game. By the results of Hart and Mas-Colell (2000), this implies that even when agents use a simple adaptive heuristic like regret minimization rather than computing equilibrium strategies, the designer can expect the SCF to be implemented correctly. We demonstrate the mechanism's effectiveness through simulations in a bilateral trade environment, where agents using regret matching converge to the desired outcomes despite having no knowledge of others' preferences or the equilibrium structure. The mechanism uses neither integer games nor modulo games.
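For readers unfamiliar with the heuristic, here is a minimal sketch of the adaptive procedure of Hart and Mas-Colell (2000) on a standalone 2x2 game (our own illustration, not the paper's mechanism): each player switches away from its previous action with probabilities proportional to its average conditional regrets, and the empirical distribution of joint play converges to the set of correlated equilibria.

```python
import random

# mu must exceed the largest achievable average regret; we pick it
# generously for this 2x2 example.
def hart_mascolell(payoff, n_actions, rounds=50000, mu=50.0, seed=0):
    P = len(n_actions)
    rng = random.Random(seed)
    # regret_sum[p][j][k]: total gain p would have had by playing k in the
    # periods in which it actually played j
    regret_sum = [[[0.0] * n_actions[p] for _ in range(n_actions[p])]
                  for p in range(P)]
    last = [rng.randrange(n_actions[p]) for p in range(P)]
    counts = {}
    for t in range(1, rounds + 1):
        profile = []
        for p in range(P):
            j = last[p]
            probs = [max(regret_sum[p][j][k], 0.0) / (mu * t)
                     for k in range(n_actions[p])]
            probs[j] = 0.0
            probs[j] = max(0.0, 1.0 - sum(probs))  # stay with remaining prob.
            profile.append(rng.choices(range(n_actions[p]), weights=probs)[0])
        profile = tuple(profile)
        counts[profile] = counts.get(profile, 0) + 1
        for p in range(P):
            j = profile[p]
            for k in range(n_actions[p]):
                alt = list(profile)
                alt[p] = k
                regret_sum[p][j][k] += payoff(p, tuple(alt)) - payoff(p, profile)
        last = list(profile)
    return {a: c / rounds for a, c in sorted(counts.items())}

# Demo on the game of chicken: the empirical joint distribution of play
# approaches the set of correlated equilibria.
CHICKEN = {(0, 0): (6, 6), (0, 1): (2, 7), (1, 0): (7, 2), (1, 1): (0, 0)}
print(hart_mascolell(lambda p, a: CHICKEN[a][p], [2, 2]))
```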
Many outputs of cities scale in universal ways, including infrastructure, crime, and economic activity. Through a mathematical model, this study investigates the interplay between such scaling laws in human organization and governmental allocations of resources, focusing on their impact on migration patterns and social welfare. We find that if the superlinearly scaling resources of cities -- such as economic and social activity -- are the primary drivers of city dwellers' utility, then cities tend to converge to similar sizes and levels of social welfare through migration. In contrast, if sublinearly scaling resources, such as infrastructure, primarily impact utility, then migration tends to lead to megacities and inequity between large and small cities. These findings have implications for policymakers, economists, and political scientists addressing the challenges of equitable and efficient resource allocation.
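For context, urban scaling laws take the power-law form $Y = a N^{\beta}$, where $N$ is a city's population and $Y$ an aggregate output: $\beta > 1$ for superlinearly scaling quantities such as economic activity, and $\beta < 1$ for sublinearly scaling ones such as infrastructure, so that per capita output scales as $N^{\beta - 1}$. (This is the standard formulation; the model's exact specification may differ in detail.)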
Individuals often navigate several options with incomplete knowledge of their own preferences. Information provisioning tools such as public rankings and personalized recommendations have become central to helping individuals make choices, yet their value under different marketplace environments remains unexplored. This paper studies a stylized model to explore the impact of these tools in two marketplace settings: uncapacitated supply, where items can be selected by any number of agents, and capacitated supply, where each item can be matched to only a single agent. We model an agent's utility as a weighted combination of a common term, which depends only on the item and reflects its population-level quality, and an idiosyncratic term, which depends on the agent-item pair and captures individual-specific tastes. Public rankings reveal the common term, while personalized recommendations reveal both terms. In the supply-unconstrained setting, both public rankings and personalized recommendations improve welfare, with their relative value determined by the degree of preference heterogeneity: public rankings are effective when preferences are relatively homogeneous, while personalized recommendations become critical as heterogeneity increases. In contrast, in the supply-constrained setting, revealing just the common term, as public rankings do, provides limited benefit, since the total common value attainable is limited by capacity constraints. Personalized recommendations, by revealing both common and idiosyncratic terms, significantly enhance welfare by enabling agents to match with items they idiosyncratically value highly. These results illustrate the interplay between supply constraints and preference heterogeneity in determining the effectiveness of information provisioning tools, offering insights for their design and deployment in diverse settings.
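The following toy simulation (our own parameterization, with a greedy sequential choice process rather than the paper's exact matching mechanism) illustrates the comparison: utilities are $u_{ij} = w\,c_j + (1-w)\,\epsilon_{ij}$, rankings reveal $c_j$, and recommendations reveal $u_{ij}$.

```python
import random

def avg_welfare(n, w, capacitated, signal, seed=0):
    rng = random.Random(seed)
    c = [rng.random() for _ in range(n)]                       # common quality
    e = [[rng.random() for _ in range(n)] for _ in range(n)]   # idiosyncratic
    u = [[w * c[j] + (1 - w) * e[i][j] for j in range(n)] for i in range(n)]
    # "ranking" reveals only the common term; "recommendation" reveals u itself
    score = (lambda i, j: c[j]) if signal == "ranking" else (lambda i, j: u[i][j])
    taken, total = set(), 0.0
    for i in range(n):                  # agents pick greedily, one at a time
        options = [j for j in range(n) if not (capacitated and j in taken)]
        j = max(options, key=lambda jj: score(i, jj))
        taken.add(j)
        total += u[i][j]
    return total / n

for cap in (False, True):
    for sig in ("ranking", "recommendation"):
        print(cap, sig, round(avg_welfare(300, 0.5, cap, sig), 3))
```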
The main purpose of this paper is to generalize some recent results obtained by Chilarescu and Manuel Gomez. Specifically, we study the effect of the elasticity of substitution on the parameters of economic growth, distinguishing its two possible regimes: below and above one. We show that a higher elasticity of substitution increases per capita income, the relative share of physical capital, the common growth rate, and the share of human capital allocated to the production sector, and that this property holds regardless of whether the elasticity of substitution lies below or above one.
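For reference, a standard CES technology in this literature (the paper's exact specification may differ) is $Y = \left[\alpha K^{\frac{\sigma-1}{\sigma}} + (1-\alpha)(hL)^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}}$, where $K$ is physical capital, $hL$ is effective (human-capital-augmented) labor, and $\sigma$ is the elasticity of substitution; the limit $\sigma \to 1$ recovers the Cobb-Douglas case $Y = K^{\alpha}(hL)^{1-\alpha}$.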
Proportional response is a well-established distributed algorithm which has been shown to converge to competitive equilibria in both Fisher and Arrow-Debreu markets, for various sub-families of homogeneous utilities, including linear and constant elasticity of substitution utilities. We propose a natural generalization of proportional response for gross substitutes utilities, and prove that it converges to competitive equilibria in Fisher markets. This is the first convergence result of a proportional response style dynamics in Fisher markets for utilities beyond the homogeneous utilities covered by the Eisenberg-Gale convex program. We show an empirical convergence rate of $O(1/T)$ for the prices. Furthermore, we show that the allocations of a lazy version of the generalized proportional response dynamics converge to competitive equilibria in Arrow-Debreu markets.
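For concreteness, here is the classical proportional response dynamics in a linear Fisher market (the textbook special case; the paper's generalization to gross substitutes utilities is more involved): each buyer splits its budget into bids in proportion to the utility each good contributed in the previous round.

```python
# Utilities are assumed strictly positive so bids and prices stay positive.
def proportional_response(u, B, rounds=1000):
    n, m = len(u), len(u[0])                  # n buyers, m goods (unit supply)
    bids = [[B[i] / m] * m for i in range(n)] # start from uniform bids
    for _ in range(rounds):
        prices = [sum(bids[i][j] for i in range(n)) for j in range(m)]
        # each good is allocated in proportion to the bids placed on it
        x = [[bids[i][j] / prices[j] for j in range(m)] for i in range(n)]
        # next bids are proportional to the utility each good contributed
        for i in range(n):
            total = sum(u[i][j] * x[i][j] for j in range(m))
            bids[i] = [B[i] * u[i][j] * x[i][j] / total for j in range(m)]
    return prices, x

# Two buyers with unit budgets and mirrored linear utilities: prices
# converge to (1, 1) and each buyer ends up with its preferred good.
prices, alloc = proportional_response([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0])
print([round(p, 3) for p in prices])
```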
We study resource allocation in two-sided markets from a fundamental perspective and introduce a general modeling and algorithmic framework to effectively incorporate the complex and multidimensional aspects of fairness. Our main technical contribution is to show the existence of a range of near-feasible resource allocations, parameterized in different model primitives, which gives policy designers the flexibility to balance competing policymaking requirements by fixing these parameters according to the specific application. To construct our near-feasible allocations, we start from a fractional resource allocation and perform an iterative rounding procedure to obtain an integer allocation. We give a simple yet flexible and strong sufficient condition on the target feasibility deviations for the rounding procedure to succeed, exhibiting the underlying trade-offs between market capacities, agents' demand, and fairness. To showcase our framework's modeling and algorithmic capabilities, we consider three prominent market design problems: school allocation, stable matching with couples, and political apportionment. In each of them, we obtain strengthened guarantees on the existence of near-feasible allocations capturing the corresponding fairness notions, such as proportionality, envy-freeness, and stability.
We study network revenue management problems motivated by applications such as railway ticket sales and hotel room bookings. Request types, each requiring a resource for a number of consecutive stays, arrive sequentially with known arrival probabilities. We investigate two scenarios: the reject-or-accept scenario, where a request can be fulfilled by any available resource, and the choice-based scenario, which generalizes the former by incorporating customer preferences through basic attraction models. We develop constant-factor approximation algorithms: $1-1/e$ for the reject-or-accept scenario and $0.125$ for the choice-based scenario.
The adoption of GenAI is fundamentally reshaping organizations in the knowledge economy. GenAI can significantly enhance workers' problem-solving abilities and productivity, yet it also presents a major reliability challenge: hallucinations, or errors presented as plausible outputs. This study develops a theoretical model to examine GenAI's impact on organizational structure and the role of human-in-the-loop oversight. Our findings indicate that successful GenAI adoption hinges primarily on maintaining hallucination rates below a critical level. After adoption, as GenAI advances in capability or reliability, organizations optimize their workforce by reducing worker knowledge requirements while preserving operational effectiveness through GenAI augmentation, a phenomenon known as deskilling. Unexpectedly, enhanced capability or reliability of GenAI may actually narrow the span of control, increasing the demand for managers rather than flattening organizational hierarchies. To effectively mitigate hallucination risks, many firms implement human-in-the-loop validation, in which managers review GenAI-enhanced outputs before implementation. While this validation increases managerial workload, it can, surprisingly, expand the span of control, reducing the number of managers needed. Furthermore, human-in-the-loop validation influences GenAI adoption differently based on validation costs and hallucination rates, deterring adoption in low-error, high-cost scenarios while promoting it in high-error, low-cost cases. Finally, productivity improvements from GenAI yield distinctive organizational shifts: as productivity increases, firms tend to employ fewer but more knowledgeable workers, gradually expanding managerial spans of control.
This paper studies when discrete choice data involving aggregated alternatives such as categorical data or an outside option can be rationalized by a random utility model (RUM). Aggregation introduces ambiguity in composition: the underlying alternatives may differ across individuals and remain unobserved by the analyst. We characterize the observable implications of RUMs under such ambiguity and show that they are surprisingly weak, implying only monotonicity with respect to adding aggregated alternatives and standard RUM consistency on unaggregated menus. These are insufficient to justify the use of an aggregated RUM. We identify two sufficient conditions that restore full rationalizability: non-overlapping preferences and menu-independent aggregation. Simulations show that violations of these conditions generate estimation bias, highlighting the practical importance of how aggregated alternatives are defined.
Consider a one-to-one two-sided matching market with workers on one side and single-position firms on the other, and suppose that the largest individually rational matching contains $n$ pairs. We show that the number of workers employed and positions filled in every stable matching is bounded from below by $\lceil\frac{n}{2}\rceil$ and we characterise the class of preferences that attain the bound.
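For intuition, consider the following illustrative instance (ours, not from the abstract): workers $w_1, w_2$ and firms $f_1, f_2$, where $w_1$ ranks $f_1 \succ f_2$, $f_1$ ranks $w_1 \succ w_2$, $w_2$ finds only $f_1$ acceptable, and $f_2$ finds only $w_1$ acceptable. The largest individually rational matching, $\{(w_1, f_2), (w_2, f_1)\}$, has $n = 2$ pairs, yet the unique stable matching is $\{(w_1, f_1)\}$: once $w_1$ and $f_1$, each other's top choices, are matched, $w_2$ and $f_2$ are mutually unacceptable and remain unmatched. Its size is $1 = \lceil\frac{n}{2}\rceil$, so the bound is attained.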
We compare the efficiency of restaking and Proof-of-Stake (PoS) protocols in terms of stake requirements. First, we consider a sufficient condition for a restaking graph to be secure and show that it implies that such a restaking graph can always be transformed into secure PoS protocols. Next, we derive two main results, giving upper and lower bounds on the extra stake that must be added to the validators of a secure restaking graph in order to transform it into secure PoS protocols. In particular, we show that the stake savings of restaking relative to PoS protocols can be very large, growing asymptotically, in the worst case, as the square root of the number of validators. We also study the complementary question of transforming secure PoS protocols into an aggregate secure restaking graph, and provide lower and upper bounds on the PoS savings compared to restaking.
B\'eal et al. (Int J Game Theory 54, 2025) introduce the Diversity Owen value for TU-games with diversity constraints and provide axiomatic characterizations using the axioms of fairness and balanced contributions. However, the uniqueness proofs of these characterizations contain logical flaws. In this note, we provide corrected proofs of the characterizations by introducing the null player for diverse games axiom. We also establish an alternative characterization of the Diversity Owen value by modifying the axioms of the above characterizations.
The increasing vulnerability of power systems has heightened the need for operating reserves to manage contingencies such as generator outages, line failures, and sudden load variations. Unlike energy costs, which are driven by consumer demand, operating reserve costs arise from addressing the most critical credible contingencies, prompting the question: how should these costs be allocated through efficient pricing mechanisms? As an alternative to previously reported schemes, this paper presents a new causation-based pricing framework for electricity markets based on contingency-constrained energy and reserve scheduling models. Salient features include a novel security charge mechanism along with explicitly defined prices for up-spinning reserves, down-spinning reserves, and transmission services. These features ensure more comprehensive and efficient cost-reflective market operations. Moreover, the proposed nodal pricing scheme yields revenue adequacy and neutrality while promoting reliability incentives for generators based on the cost-causation principle. Another notable aspect of the proposed framework is the economic incentive for transmission assets, which are remunerated based on their use in delivering energy and reserves across all contingency states. Numerical results from two case studies illustrate the performance of the proposed pricing scheme.