Can players sustain long-run trust when their equilibrium beliefs are shaped by machine-learning methods that penalize complexity? I study a game in which each agent in an infinite sequence of agents with one-period recall decides whether to place trust in her immediate successor. The cost of trusting is state-dependent. Each player's best response is based on a belief about others' behavior, which is a coarse fit of the true population strategy with respect to a partition of relevant contingencies. In equilibrium, this partition minimizes the sum of the mean squared prediction error and a complexity penalty proportional to its size. Relative to symmetric mixed-strategy Nash equilibrium, this solution concept significantly narrows the scope for trust.
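As a rough illustration of the penalized-fit idea (a sketch only, not the paper's model), the snippet below enumerates all partitions of a small set of contingencies and selects the one minimizing the frequency-weighted mean squared prediction error plus a penalty proportional to the number of cells. The contingencies, frequencies, behavior probabilities, and penalty weight are invented placeholders.

```python
def partitions(items):
    """Recursively enumerate all set partitions of a list of items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        yield [[first]] + smaller                      # `first` in its own cell
        for i, cell in enumerate(smaller):             # or join an existing cell
            yield smaller[:i] + [[first] + cell] + smaller[i + 1:]

def penalized_partition(states, freq, behavior, lam):
    """Return the partition minimizing weighted MSE of the coarse fit
    plus a complexity penalty lam * (number of cells)."""
    best_score, best_part = None, None
    for part in partitions(states):
        mse = 0.0
        for cell in part:
            w = sum(freq[s] for s in cell)
            mean = sum(freq[s] * behavior[s] for s in cell) / w
            mse += sum(freq[s] * (behavior[s] - mean) ** 2 for s in cell)
        score = mse + lam * len(part)
        if best_score is None or score < best_score:
            best_score, best_part = score, part
    return best_part, best_score

# Toy contingencies, frequencies, and true trust probabilities (all invented).
states = ["low", "mid", "high"]
freq = {"low": 0.5, "mid": 0.3, "high": 0.2}
behavior = {"low": 0.9, "mid": 0.6, "high": 0.1}
print(penalized_partition(states, freq, behavior, lam=0.01))
```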
We consider a repeated game in which the players are the nodes of a connected network. Each player observes only her neighbors' moves; thus, monitoring is private and imperfect. Players can communicate with their neighbors at each stage: each player may choose any subset of her neighbors and send the same message to every player in that subset. Thus, communication is local and both public and private. Both the communication and monitoring structures are given by the network. The solution concept is perfect Bayesian equilibrium. We show that, for any number of players, a folk theorem holds if and only if the network is 2-connected.
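A minimal sketch of the graph condition appearing in the result: a network with at least three players is 2-connected when it is connected and remains connected after deleting any single vertex. The adjacency lists at the bottom are made-up examples.

```python
def is_connected(adj, removed=None):
    """Depth-first connectivity check on the vertices of `adj` not in `removed`."""
    removed = removed or set()
    nodes = [v for v in adj if v not in removed]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def is_two_connected(adj):
    """2-connected: at least 3 vertices, connected, and no cut vertex."""
    if len(adj) < 3 or not is_connected(adj):
        return False
    return all(is_connected(adj, removed={v}) for v in adj)

# Toy networks: a 4-cycle is 2-connected; a 3-player line is not.
cycle = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
line = {1: [2], 2: [1, 3], 3: [2]}
print(is_two_connected(cycle), is_two_connected(line))  # True False
```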
We study a distributed learning problem in which learning agents are embedded in a directed acyclic graph (DAG). There is a fixed and arbitrary distribution over feature/label pairs, and each agent or vertex in the graph is able to directly observe only a subset of the features -- potentially a different subset for every agent. The agents learn sequentially in some order consistent with a topological sort of the DAG, committing to a model mapping observations to predictions of the real-valued label. Each agent observes the predictions of their parents in the DAG, and trains their model using both the features of the instance that they directly observe, and the predictions of their parents as additional features. We ask when this process is sufficient to achieve \emph{information aggregation}, in the sense that some agent in the DAG is able to learn a model whose error is competitive with the best model that could have been learned (in some hypothesis class) with direct access to \emph{all} features, despite the fact that no single agent in the network has such access. We give upper and lower bounds for this problem for both linear and general hypothesis classes. Our results identify the \emph{depth} of the DAG as the key parameter: information aggregation can occur over sufficiently long paths in the DAG, provided that all of the relevant features are well represented along the path; conversely, there are distributions over which information aggregation cannot occur even in the linear case, and even in arbitrarily large DAGs that lack sufficient depth (such as a hub-and-spoke topology in which the spoke vertices collectively see all the features). We complement our theoretical results with a comprehensive set of experiments.
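The sketch below mimics the sequential protocol on a toy chain DAG with least-squares learners: each agent regresses the label on the features it observes plus its parent's prediction. The data-generating process, feature split, and sample size are invented for illustration and are not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 features, linear label with noise.
n, d = 2000, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

def fit_linear(features, target):
    """Least-squares fit with an intercept; returns a prediction function."""
    A = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return lambda F: np.column_stack([F, np.ones(len(F))]) @ coef

# A chain DAG: agent i observes only feature i plus its parent's prediction.
preds = None
for i in range(d):
    own = X[:, [i]]
    inputs = own if preds is None else np.column_stack([own, preds])
    model = fit_linear(inputs, y)
    preds = model(inputs)

print("last chain agent MSE:", np.mean((preds - y) ** 2))
print("all-features MSE:", np.mean((fit_linear(X, y)(X) - y) ** 2))
```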
We prove that Ranked Pairs orders candidates in such a way as to minimize the $p$-norm, in the limit as $p \to \infty$, of those head-to-head margins of victory which go against its ordering.
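For reference, a small sketch of the standard Ranked Pairs procedure (lock pairwise margins from largest to smallest, skipping any that would close a cycle, then read off the order); the margin dictionary is a made-up example, and the code does not reproduce the paper's $p$-norm argument.

```python
def ranked_pairs(margins):
    """margins[(a, b)] = (votes for a over b) - (votes for b over a)."""
    candidates = {c for pair in margins for c in pair}
    locked = set()

    def reaches(src, dst):
        """Is dst reachable from src along locked edges?"""
        stack, seen = [src], set()
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            for (a, b) in locked:
                if a == v and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return False

    # Lock positive margins from largest to smallest, skipping cycle-makers.
    for (a, b), m in sorted(margins.items(), key=lambda kv: -kv[1]):
        if m > 0 and not reaches(b, a):
            locked.add((a, b))

    def count_reachable(c):
        return sum(1 for d in candidates if d != c and reaches(c, d))

    # Read off the order implied by the locked (acyclic) relation.
    return sorted(candidates, key=lambda c: -count_reachable(c))

# Toy Condorcet cycle: A beats B by 8, B beats C by 6, C beats A by 2.
margins = {("A", "B"): 8, ("B", "C"): 6, ("C", "A"): 2,
           ("B", "A"): -8, ("C", "B"): -6, ("A", "C"): -2}
print(ranked_pairs(margins))  # ['A', 'B', 'C']
```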
We consider a set $E$ of $m$ indivisible goods and a set $N$ consisting of $n \geq 2$ agents. The paper shows that if two agents have \textit{arbitrary} set monotonic valuation functions and the remaining agents have size monotonic valuation functions, then EFX allocations always exist.
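To make the fairness notion concrete, here is a small check of the EFX condition for a given allocation: every agent values her own bundle at least as much as any other agent's bundle after removing any single good from it. The valuations and allocation are toy inputs, and additive valuations are assumed purely for the example.

```python
def is_efx(bundles, value):
    """bundles[i] is agent i's set of goods; value(i, S) is i's value for set S.

    EFX: for all agents i, j and every good g in bundles[j],
         value(i, bundles[i]) >= value(i, bundles[j] - {g}).
    """
    agents = range(len(bundles))
    return all(
        value(i, bundles[i]) >= value(i, bundles[j] - {g})
        for i in agents for j in agents if i != j
        for g in bundles[j]
    )

# Toy additive valuations over goods 0..2 (illustration only).
vals = [{0: 5, 1: 1, 2: 1}, {0: 1, 1: 3, 2: 3}]
additive = lambda i, S: sum(vals[i][g] for g in S)
print(is_efx([{0}, {1, 2}], additive))  # True
```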
We study the classical assignment problem with initial endowments in a probabilistic framework. In this setting, each agent initially owns an object and has strict preferences over the entire set of objects, and the goal is to reassign objects in a way that satisfies desirable properties such as strategy-proofness, Pareto efficiency, and individual rationality. While the celebrated result by Ma (1994) shows that the Top Trading Cycles (TTC) rule is the unique deterministic rule satisfying these properties, similar positive results are scarce in the probabilistic domain. We extend Ma's result to the probabilistic setting and consider, as desirable properties, SD-efficiency, SD-individual rationality, and a weaker notion of SD-strategy-proofness -- SD-top-strategy-proofness -- which only requires that no agent can increase the probability of receiving her top-ranked object by misreporting. We show that under deterministic endowments, a probabilistic rule is SD-efficient, SD-individually rational, and SD-top-strategy-proof if and only if it coincides with the TTC rule. Our result highlights a positive possibility in the face of earlier impossibility results for fractional endowments (Athanassoglou and Sethuraman (2011)) and provides a first step toward reconciling desirable properties in probabilistic assignments with endowments.
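For context, a compact sketch of the deterministic Top Trading Cycles rule referenced above: each remaining agent points to the owner of her most-preferred remaining object, and agents on a pointing cycle trade and leave. The preferences and endowments are a made-up example; the probabilistic machinery of the paper is not reproduced.

```python
def top_trading_cycles(preferences, endowment):
    """preferences[i]: list of objects, best first; endowment[i]: i's own object."""
    owner = {obj: i for i, obj in endowment.items()}
    remaining = set(preferences)
    assignment = {}
    while remaining:
        # Each remaining agent points to the owner of her best remaining object.
        points_to = {
            i: owner[next(o for o in preferences[i] if owner[o] in remaining)]
            for i in remaining
        }
        # Follow pointers from any agent until a cycle repeats.
        path, current = [], next(iter(remaining))
        while current not in path:
            path.append(current)
            current = points_to[current]
        cycle = path[path.index(current):]
        # Everyone in the cycle receives the object owned by whom she points at.
        for i in cycle:
            assignment[i] = endowment[points_to[i]]
        remaining -= set(cycle)
    return assignment

# Toy example: three agents owning objects a, b, c.
prefs = {1: ["b", "a", "c"], 2: ["a", "b", "c"], 3: ["a", "b", "c"]}
endow = {1: "a", 2: "b", 3: "c"}
print(top_trading_cycles(prefs, endow))  # {1: 'b', 2: 'a', 3: 'c'}
```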
We study a principal-agent model involving a large population of heterogeneously interacting agents. By extending the existing methods, we find the optimal contracts assuming a continuum of agents, and show that, when the number of agents is sufficiently large, the optimal contracts for the problem with a continuum of agents are near-optimal for the finite-agent problem. We perform comparative statics and provide numerical simulations to analyze how the agents' connectivity affects the principal's value, the effort of the agents, and the optimal contracts.
We study the problem of allocating homogeneous and indivisible objects among agents with money. In particular, we investigate the relationship between egalitarian-equivalence (Pazner and Schmeidler, 1978), as a fairness concept, and efficiency under agents' incentive constraints. As a first result, we characterize the class of mechanisms that satisfy egalitarian-equivalence, strategy-proofness, individual rationality, and no subsidy. Our characterization reveals a strong tension between egalitarian-equivalence and efficiency: under these properties, the mechanisms allocate objects only in limited cases. To address this limitation, we replace strategy-proofness with the weaker incentive property of non-obvious manipulability (Troyan and Morrill, 2020). We show that this relaxation allows us to design mechanisms that achieve efficiency while still ensuring egalitarian-equivalence. Furthermore, upon achieving efficiency, we identify the agent-optimal mechanism in the characterized class.
This paper studies matching markets where institutions are matched with possibly more than one individual. The matching market contains couples who view their pair of jobs as complements. First, we show by means of an example that a stable matching may fail to exist even when both couples and institutions have responsive preferences. Next, we provide conditions on couples' preferences that are necessary and sufficient to ensure a stable matching for every preference profile where institutions may have any responsive preference. Finally, we do the same with respect to institutions' preferences; that is, we provide conditions on institutions' preferences that are necessary and sufficient to ensure a stable matching for every preference profile where couples may have any responsive preference.
We study many-to-one matching problems between institutions and individuals, where each institution may be matched to multiple individuals. The matching market includes couples, who view pairs of institutions as complementary. Institutions' preferences over sets of individuals are assumed to satisfy responsiveness, whereas couples' preferences over pairs of institutions may violate responsiveness. In this setting, we first assume that all institutions share a common preference ordering over individuals, and we establish: (i) a complete characterization of all couples' preference profiles for which a stable matching exists, under the additional assumption that couples violate responsiveness only to ensure co-location at the same institution, and (ii) a necessary and sufficient condition on the common institutional preference such that a stable matching exists when couples may violate responsiveness arbitrarily. Next, we relax the common preference assumption, requiring institutions to share a common ranking only over the members of each couple. Under this weaker assumption, we provide: (i) a complete characterization of all couples' preferences for which a stable matching exists, and (ii) a sufficient condition on individuals' preferences that guarantees the existence of a stable matching.
Which level of voting costs is optimal in a democracy? This paper argues that intermediate voting costs - what we term a "Midcost democracy" - should be avoided, as they fail to ensure that electoral outcomes reflect the preferences of the majority. We study a standard binary majority decision in which a majority of the electorate prefers alternative A over alternative B. The population consists of partisan voters, who always participate, and non-partisan voters, who vote only when they believe their participation could be pivotal, given that voting entails a cost. We show that the probability of the majority-preferred alternative A winning is non-monotonic in the level of voting costs. Specifically, when voting costs are either high or negligible, alternative A wins in all equilibria. However, at intermediate cost levels, this alignment breaks down. These findings suggest that democratic systems should avoid institutional arrangements that lead to moderate voting costs, as they may undermine the majority principle.
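To fix ideas, a back-of-the-envelope pivot-probability calculation in a toy version of the turnout logic (not the paper's equilibrium analysis): with fixed partisan blocs and each other non-partisan voting with probability q, a non-partisan's vote is decisive when the remaining votes are tied or her side trails by exactly one, so with benefit normalized to one she participates only if her cost is at most this probability. All numbers are illustrative.

```python
from math import comb

def pivot_probability(a_partisans, b_partisans, other_nonpartisans, q):
    """Probability that one additional A-vote is decisive, when each of the
    other non-partisans (all assumed here to prefer A) votes with probability q:
    the remaining votes are tied, or A trails by exactly one."""
    def p_k_votes(k):
        if k < 0 or k > other_nonpartisans:
            return 0.0
        return comb(other_nonpartisans, k) * q**k * (1 - q)**(other_nonpartisans - k)
    deficit = b_partisans - a_partisans
    return p_k_votes(deficit) + p_k_votes(deficit - 1)

# A non-partisan with benefit 1 participates only if her cost is at most this.
print(pivot_probability(a_partisans=4, b_partisans=6, other_nonpartisans=5, q=0.5))
```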
This paper undertakes an analysis of deforestation in the Amazon region using a pathways-based approach to sustainability. We ground the analysis primarily in the sustainability transitions literature but also build a bridge to socio-ecological concepts, which helps us understand the nature of transitions in this context. The concept of a deforestation system is developed by examining the interplay of infrastructure, technologies, narratives, and institutions. Drawing on a literature review and an in-depth case study of Puerto Maldonado in Madre de Dios, Peru, the paper identifies three pathways for addressing deforestation: optimisation, natural capital, and regenerative change. We suggest that while the optimisation pathway provides partial solutions through mitigation and compensation strategies, it often reinforces extractivist logics. The study also underscores the limitations of natural capital frameworks, which tend to rely on centralised governance and market-based instruments while lacking broader social engagement. In contrast, our findings emphasise the potential of regenerative strategies rooted in local agency, community-led experimentation, and context-sensitive institutional arrangements. The paper contributes to ongoing debates on biodiversity governance by illustrating how the spatial and long-term dynamics of deforestation interact, and why inclusive, territorially grounded pathways are crucial for bending the curve of biodiversity loss.
This paper explores a novel extension of dynamic matching theory by analyzing a three-way matching problem involving agents from three distinct populations, each with two possible types. Unlike traditional static or two-way dynamic models, our setting captures more complex team-formation environments where one agent from each of the three populations must be matched to form a valid team. We consider two preference structures: assortative or homophilic, where agents prefer to be matched with others of the same type, and dis-assortative or heterophilic, where diversity within the team is valued. Agents arrive sequentially and face a trade-off between matching immediately and waiting for a higher-quality match in the future, albeit at a waiting cost. We construct and analyze the corresponding transition probability matrices for each preference regime and demonstrate the existence and uniqueness of stationary distributions. Our results show that stable and efficient outcomes can arise in dynamic, multi-agent matching environments, offering a deeper understanding of how complex matching processes evolve over time and how they can be effectively managed.
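As a generic illustration of the final step (existence and uniqueness of stationary distributions), the sketch below computes the stationary distribution of a row-stochastic transition matrix by solving $\pi P = \pi$ with $\sum_i \pi_i = 1$; the 3-state matrix is a placeholder rather than one of the paper's constructed matrices.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi P = pi together with sum(pi) = 1 for a row-stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # (P^T - I) pi = 0 and 1^T pi = 1
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Placeholder 3-state chain (rows sum to one).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
print(stationary_distribution(P))
```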
Minimal balanced collections are a generalization of partitions of a finite set of n elements and have important applications in cooperative game theory and discrete mathematics. However, their number is not known beyond n = 4. In this paper we investigate the problem of generating minimal balanced collections and implement Peleg's algorithm, which permits generating all minimal balanced collections up to n = 7. We then provide practical algorithms to check many properties of coalitions and games, based on minimal balanced collections, in a way that is faster than linear-programming-based methods. In particular, we construct an algorithm to check whether the core of a cooperative game is a stable set in the sense of von Neumann and Morgenstern. The algorithm implements a theorem according to which the core is a stable set if and only if a certain nested balancedness condition holds. The second level of this condition requires generalizing the notion of balanced collection to balanced sets.
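For comparison, here is the standard linear-programming check of whether a given collection of coalitions is balanced (strictly positive weights summing to one over each player's coalitions). This is the LP-based baseline the paper improves upon, not the paper's algorithm, and the collections at the bottom are toy inputs.

```python
import numpy as np
from scipy.optimize import linprog

def is_balanced(collection, n):
    """Check whether `collection` (a list of coalitions, subsets of {0,..,n-1})
    admits strictly positive weights d_S with sum_{S containing i} d_S = 1 for all i.

    Maximizes the smallest weight t subject to the balancedness equalities;
    the collection is balanced iff the optimum has t > 0."""
    k = len(collection)
    # Variables: d_1..d_k, t.  Equalities: per-player weight sums equal 1.
    A_eq = np.zeros((n, k + 1))
    for j, S in enumerate(collection):
        for i in S:
            A_eq[i, j] = 1.0
    b_eq = np.ones(n)
    # Inequalities: t - d_j <= 0 for all j, so that t <= min_j d_j.
    A_ub = np.hstack([-np.eye(k), np.ones((k, 1))])
    b_ub = np.zeros(k)
    c = np.zeros(k + 1)
    c[-1] = -1.0                     # maximize t
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * (k + 1))
    return bool(res.success) and res.x[-1] > 1e-9

# {12, 13, 23} is a classic balanced collection on 3 players (weights 1/2 each).
print(is_balanced([{0, 1}, {0, 2}, {1, 2}], n=3))   # True
print(is_balanced([{0, 1}, {1, 2}], n=3))           # False
```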
In this paper, we propose a new framework for representing multidimensional incomplete preferences through zonotope-valued utilities, addressing the shortcomings of traditional scalar and vector-based models in decision theory. Traditional approaches assign single numerical values to alternatives, failing to capture the complexity of preferences where alternatives remain incomparable due to conflicting criteria across multiple dimensions. Our method maps each alternative to a zonotope, a convex geometric object in \(\mathbb{R}^m\) formed by Minkowski sums of intervals, which encapsulates the multidimensional structure of preferences with mathematical rigor. The set-valued nature of these payoffs stems from multiple sources, including non-probabilistic uncertainty, such as imprecise utility evaluation due to incomplete information about criteria weights, and probabilistic uncertainty arising from stochastic decision environments. By decomposing preference relations into interval orders and utilizing an extended set difference operator, we establish a rigorous axiomatization that defines preference as one alternative's zonotope differing from another's within the non-negative orthant of \(\mathbb{R}^m\). This framework generalizes existing representations and provides a visually intuitive and theoretically robust tool for modeling trade-offs across dimensions even when alternatives remain incomparable.
We study $k$-price auctions in a complete information environment and characterize all pure-strategy Nash equilibrium outcomes. In a setting with $n$ agents having ordered valuations, we show that any agent, except those with the lowest $k-2$ valuations, can win in equilibrium. As a consequence, worst-case welfare increases monotonically as we go from $k=2$ (second-price auction) to $k=n$ (lowest-price auction), with the first-price auction achieving the highest worst-case welfare.
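For concreteness, a tiny sketch of the $k$-price auction outcome for a submitted bid profile (the highest bidder wins and pays the $k$-th highest bid); the bids are made up, and the code does not carry out the paper's equilibrium analysis.

```python
def k_price_auction(bids, k):
    """bids: dict agent -> bid.  The highest bidder wins and pays the k-th highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: -kv[1])
    winner = ranked[0][0]
    payment = ranked[k - 1][1]        # k-th highest submitted bid
    return winner, payment

# Toy profile with n = 4 bidders.
bids = {"a": 10, "b": 7, "c": 5, "d": 2}
print(k_price_auction(bids, k=3))     # ('a', 5)
```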
We prove that any optimal, independent, and zero unanimous fuzzy classification aggregation function of a continuum of individual classifications of $m\ge 3$ objects into $2\le p\le m$ types must be a weighted arithmetic mean. We also provide a characterization for the case when $m=p=2$.
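To illustrate the form of the characterized rule, the sketch below aggregates a finite stand-in for the continuum of individual fuzzy classifications by a weighted arithmetic mean, taken entrywise over objects and types; the membership matrices and weights are placeholders.

```python
import numpy as np

def weighted_mean_aggregator(classifications, weights):
    """classifications: array of shape (individuals, m objects, p types), each row of
    each individual's matrix summing to one; weights: nonnegative, summing to one.
    Returns the weighted arithmetic mean classification (also row-stochastic)."""
    return np.tensordot(weights, classifications, axes=1)

# Two individuals classifying m = 3 objects into p = 2 types (placeholder data).
C = np.array([[[1.0, 0.0], [0.5, 0.5], [0.2, 0.8]],
              [[0.6, 0.4], [0.3, 0.7], [0.0, 1.0]]])
w = np.array([0.75, 0.25])
print(weighted_mean_aggregator(C, w))
```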
We consider a mechanism design setting with a single item and a single buyer who is uncertain about the value of the item. Both the buyer and the seller have a common model for the buyer's value, but the buyer discovers her true value only upon receiving the item. Mechanisms in this setting can be interpreted as randomized refund mechanisms, which allocate the item at some price and then offer a (partial and/or randomized) refund to the buyer in exchange for the item if the buyer is unsatisfied with her purchase. Motivated by their practical importance, we study the design of optimal deterministic mechanisms in this setting. We characterize optimal mechanisms as virtual value maximizers for both continuous and discrete type settings. We then use this characterization, along with bounds on the menu size complexity, to develop efficient algorithms for finding optimal and near-optimal deterministic mechanisms.
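As a toy complement to the characterization (and not the paper's virtual-value algorithm), the sketch below brute-forces a revenue-maximizing deterministic price-and-refund pair on a grid for a discrete value distribution, under the simplifying assumptions that a returned item has no salvage value, the buyer returns the item exactly when her realized value falls below the refund, and she buys whenever her expected payoff is non-negative.

```python
import numpy as np

def best_price_refund(values, probs, grid):
    """Brute-force the seller's best (price p, refund r) with r <= p on `grid`,
    assuming the buyer knows only the distribution (values, probs) when buying,
    returns the item iff her realized value v is below r, and buys iff
    E[max(v, r)] - p >= 0; a returned item is assumed worthless to the seller."""
    best = (None, -np.inf)
    for p in grid:
        for r in grid:
            if r > p:
                continue
            buyer_utility = np.sum(probs * np.maximum(values, r)) - p
            if buyer_utility < 0:
                continue                      # buyer declines the contract
            revenue = p - r * np.sum(probs[values < r])
            if revenue > best[1]:
                best = ((p, r), revenue)
    return best

# Placeholder value distribution and search grid.
values = np.array([1.0, 4.0, 10.0])
probs = np.array([0.3, 0.4, 0.3])
grid = np.linspace(0.0, 10.0, 101)
print(best_price_refund(values, probs, grid))
```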