The Deferred Acceptance algorithm (DA) frequently produces Pareto-inefficient allocations in school choice problems. While a number of efficient mechanisms that Pareto-dominate DA are available, a normative question remains unexplored: which students should benefit from efficiency enhancements? We address this question by introducing the concept of \emph{maximally improvable students}, who benefit in every improvement over DA that includes as many students as possible in set-inclusion terms. We prove that common mechanisms such as Efficiency-Adjusted DA (EADA) and Top Trading Cycles applied to DA (DA+TTC) can fall significantly short of this benchmark: they may improve as few as two maximally improvable students when up to $n-1$ could benefit. Addressing this limitation, we develop the Maximum Improvement over DA mechanism (MIDA), which generates an efficient allocation that maximises the number of students improved over DA. We show that MIDA can generate fewer blocking pairs than EADA and DA+TTC, demonstrating that its distributional improvements need not come at the cost of high justified envy.
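As a concrete reference point, below is a minimal sketch of student-proposing Deferred Acceptance, the baseline allocation that EADA, DA+TTC, and MIDA improve upon; the preference profile and the function name `deferred_acceptance` are illustrative assumptions, not taken from the paper.

```python
# Minimal student-proposing Deferred Acceptance (DA). The toy instance at the
# bottom is illustrative; it is not an example from the paper.

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """student_prefs[s]: list of schools, most preferred first.
    school_prefs[c]: list of students, highest priority first."""
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in school_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next school s will propose to
    held = {c: [] for c in school_prefs}          # tentatively held students
    unmatched = list(student_prefs)
    while unmatched:
        s = unmatched.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                              # s has exhausted her list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda t: rank[c][t])    # best priority first
        if len(held[c]) > capacities[c]:
            unmatched.append(held[c].pop())       # reject lowest-priority student
    return {s: c for c, ss in held.items() for s in ss}

students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
schools = {"A": ["s3", "s1", "s2"], "B": ["s2", "s1", "s3"]}
print(deferred_acceptance(students, schools, {"A": 1, "B": 1}))
```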
Many real matching markets face distributional and fairness constraints. Motivated by the Chinese Major Transition Program (CMT), this paper studies the design of exchange mechanisms within a framework featuring both distributional constraints and dual priority-respecting constraints. Specifically, each student has an initially assigned major and applies to transfer to a more desirable one. A student can successfully transfer majors only if they obtain eligibility from both their initial major and the applied-for major. Each major has a dual priority: a strict priority over current students who wish to transfer out and a strict priority over students from other majors who wish to transfer in. Additionally, each major faces a ceiling constraint and a floor constraint that regulate the distribution of students. We show that the existing mechanisms of the CMT result in avoidable inefficiencies, and we propose two mechanisms that match students to majors efficiently while respecting each major's distributional constraints and dual priority. Both mechanisms are based on a proposed solution concept, eligibility maximization (EM), and two processes for identifying improvement cycles: transfer-in exchangeable cycles and transfer-out exchangeable cycles.
We consider long-lived agents who interact repeatedly in a social network. In each period, each agent learns about an unknown state by observing a private signal and her neighbors' actions in the previous period before taking an action herself. Our main result shows that the learning rate of the slowest learning agent is bounded from above independently of the number of agents, the network structure, and the agents' strategies. Applying this result to equilibrium learning with rational agents shows that the learning rate of all agents in any equilibrium is bounded under general conditions. This extends recent findings on equilibrium learning and demonstrates that the limitation stems from an inherent tradeoff between optimal action choices and information revelation rather than strategic considerations.
Online selection problems frequently arise in applications such as crowdsourcing and employee recruitment. Existing research typically focuses on candidates with a single attribute. However, crowdsourcing tasks often require contributions from individuals across various demographics. Further motivated by the dynamic nature of crowdsourcing and hiring, we study the diversity-fair online selection problem, in which a recruiter must make real-time decisions to foster workforce diversity across many dimensions. We propose two scenarios for this problem. The fixed-capacity scenario, suited to short-term hiring of crowdsourced workers, provides the recruiter with a fixed capacity to fill temporary job vacancies. In contrast, in the unknown-capacity scenario, recruiters optimize diversity across recruitment seasons with increasing capacities, reflecting that the firm honors diversity considerations in its long-term employee acquisition strategy. Modeling diversity over $d$ dimensions as a max-min fairness objective, we show that no policy can surpass a competitive ratio of $O(1/d^{1/3})$ for either scenario, indicating that any achievable result inevitably decays by some polynomial factor in $d$. We therefore develop bilevel hierarchical randomized policies that ensure compliance with the capacity constraint. For the fixed-capacity scenario, leveraging marginal information about the arriving population allows us to achieve a competitive ratio of $1/(4\sqrt{d} \lceil \log_2 d \rceil)$. For the unknown-capacity scenario, we establish a competitive ratio of $\Omega(1/d^{3/4})$ under mild boundedness conditions. In both bilevel hierarchical policies, the higher level determines ex-ante selection probabilities and then informs the lower level's randomized selection, which ensures no loss in efficiency. Both policies prioritize core diversity and then adjust for underrepresented dimensions.
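To make the objective concrete, here is a minimal sketch of the max-min fairness value of a selected cohort; the raw coverage counts and the toy attribute sets are our assumptions rather than the paper's exact normalization.

```python
# Max-min fairness over d demographic dimensions: the value of a cohort is
# driven by its worst-represented dimension. Counts and data are illustrative.
import numpy as np

def maxmin_fairness(selected, attributes, d):
    """selected: chosen candidate ids; attributes[i]: dimensions i covers."""
    counts = np.zeros(d)
    for i in selected:
        for dim in attributes[i]:
            counts[dim] += 1
    return counts.min()

attrs = {0: {0}, 1: {1}, 2: {2}, 3: {0, 1}}
print(maxmin_fairness([0, 1, 2], attrs, 3))  # 1.0: every dimension covered once
print(maxmin_fairness([0, 3], attrs, 3))     # 0.0: dimension 2 is uncovered
```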
This paper offers a proof of the Coase theorem by formalizing the notion of \textit{ideal exchanges}.
Dominance is a fundamental concept in game theory. In strategic-form games dominated strategies can be identified in polynomial time. As a consequence, iterative removal of dominated strategies can be performed efficiently as a preprocessing step for reducing the size of a game before computing a Nash equilibrium. For imperfect-information games in extensive form, we could convert the game to strategic form and then iteratively remove dominated strategies in the same way; however, this conversion may cause an exponential blowup in game size. In this paper we define and study the concept of dominated actions in imperfect-information games. Our main result is a polynomial-time algorithm for determining whether an action is dominated (strictly or weakly) by any mixed strategy in $n$-player games, which can be extended to an algorithm for iteratively removing dominated actions. This allows us to efficiently reduce the size of the game tree as a preprocessing step for Nash equilibrium computation. We explore the role of dominated actions empirically in the "All In or Fold" No-Limit Texas Hold'em poker variant.
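For intuition on the strategic-form baseline, the sketch below runs the standard linear-programming test for strict dominance by a mixed strategy, using SciPy; the payoff matrix is a made-up example, and the paper's extensive-form algorithm is not reproduced here.

```python
# Standard LP test: strategy a is strictly dominated iff some mixture of the
# other strategies beats it against every opponent profile by a margin eps > 0.
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, a):
    """U[i, s]: payoff of our pure strategy i against opponent profile s."""
    others = [i for i in range(U.shape[0]) if i != a]
    k, m = len(others), U.shape[1]
    # Variables: mixture weights x (k of them) and a margin eps; maximize eps.
    c = np.zeros(k + 1)
    c[-1] = -1.0
    # For every profile s: U[a, s] + eps <= sum_j x_j * U[others[j], s]
    A_ub = np.hstack([-U[others].T, np.ones((m, 1))])
    b_ub = -U[a]
    A_eq = np.zeros((1, k + 1))
    A_eq[0, :k] = 1.0                                   # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * k + [(None, None)])
    return res.success and -res.fun > 1e-9              # eps* > 0

# Row 2 is strictly dominated by mixing rows 0 and 1 equally.
U = np.array([[3.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
print(strictly_dominated(U, 2))   # True
```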
In school choice, policymakers consolidate a district's objectives for a school into a priority ordering over students. They then face a trade-off between respecting these priorities and assigning students to more-preferred schools. However, because priorities are the amalgamation of multiple policy goals, some may be more flexible than others. This paper introduces a model that distinguishes between two types of priority: a between-group priority that ranks groups of students and must be respected, and a within-group priority for efficiently allocating seats within each group. The solution I introduce, the unified core, integrates both types. I provide a two-stage algorithm, the DA-TTC, that implements the unified core and generalizes both the Deferred Acceptance and Top Trading Cycles algorithms. This approach provides a method for improving efficiency in school choice while honoring policymakers' objectives.
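The second stage builds on Top Trading Cycles; below is a minimal unit-capacity TTC sketch, where the illustrative endowments stand in for the seats that the DA stage of the DA-TTC would produce.

```python
# Minimal Top Trading Cycles (TTC) for unit-capacity schools. Assumes complete
# preference lists; the endowments below are illustrative, not from the paper.

def ttc(prefs, endowment):
    """prefs[s]: schools in s's preference order; endowment[s]: seat s holds."""
    assignment = {}
    active = dict(endowment)                 # student -> seat currently held
    while active:
        owner = {seat: s for s, seat in active.items()}
        # Each active student points at the holder of her top remaining seat.
        point = {s: owner[next(c for c in prefs[s] if c in owner)]
                 for s in active}
        # Follow pointers from any student until a cycle repeats.
        s, seen = next(iter(active)), []
        while s not in seen:
            seen.append(s)
            s = point[s]
        cycle = seen[seen.index(s):]
        for t in cycle:                      # everyone in the cycle trades
            assignment[t] = active[point[t]]
        for t in cycle:
            del active[t]
    return assignment

endow = {"s1": "A", "s2": "B", "s3": "C"}
prefs = {"s1": ["B", "A", "C"], "s2": ["A", "B", "C"], "s3": ["C", "A", "B"]}
print(ttc(prefs, endow))   # s1 and s2 swap; s3 keeps C
```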
In this paper, we ask why the quality of commercial software, in terms of security and safety, does not measure up to what we have come to expect of other (durable) consumer goods. We examine this question through the lens of incentives. We argue that the challenge around better-quality software is due in no small part to a sequence of misaligned incentives, the most critical of which is that the harm caused by software problems is by and large shouldered by consumers, not developers. This lack of liability means software vendors have every incentive to rush low-quality software onto the market and no incentive to enhance quality control. Within this context, this paper outlines a holistic technical and policy framework we believe is needed to incentivize better and more secure software development. At the heart of the incentive realignment is the concept of software liability. This framework touches on various components, including legal, technical, and financial, that are needed for software liability to work in practice; some currently exist, some will need to be re-imagined or established. This is primarily a market-driven approach that emphasizes voluntary participation but highlights the role appropriate regulation can play. We connect and contrast this with the EU legal environment and discuss what this framework means for open-source software (OSS) development and emerging AI risks. Moreover, we present a CrowdStrike case study complete with a what-if analysis had our proposed framework been in effect. Our intention is very much to stimulate a robust conversation among both researchers and practitioners.
The structure of bilateral trading costs is one of the key features of international trade. Drawing upon the freeness-of-trade matrix, which allows the modeling of $N$-state trade costs, we develop a ``geometry of inconvenience'' to better understand how these costs impact equilibrium outcomes. The freeness-of-trade matrix was introduced in a model by Mossay and Tabuchi, who essentially proved that if a freeness-of-trade matrix is positive definite, then the corresponding model admits a unique equilibrium. Drawing upon the spectral theory of metrics, we prove that the model admits nonunique, perverse equilibria. We use this result to provide a family of policy-relevant bipartite examples, with substantive applications to economic sanctions. More generally, we show how the network structure of the freeness of trade is central to understanding the impacts of policy interventions.
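Since the Mossay-Tabuchi uniqueness condition is spectral, it can be checked numerically; the sketch below tests positive definiteness of a hypothetical 4-state freeness-of-trade matrix with bipartite structure, and the negative eigenvalue it finds is exactly the kind of failure that opens the door to perverse equilibria.

```python
# Positive definiteness check for a freeness-of-trade matrix. The 4-state
# bipartite-style matrix is illustrative, not an example from the paper.
import numpy as np

def is_positive_definite(phi):
    """phi: symmetric freeness-of-trade matrix, entries in (0, 1]."""
    return bool(np.all(np.linalg.eigvalsh(phi) > 0))

# Two blocs with easy cross-bloc trade but poor within-bloc trade.
phi = np.array([[1.0, 0.2, 0.9, 0.9],
                [0.2, 1.0, 0.9, 0.9],
                [0.9, 0.9, 1.0, 0.2],
                [0.9, 0.9, 0.2, 1.0]])
print(is_positive_definite(phi))      # False: one eigenvalue is negative
print(np.linalg.eigvalsh(phi))
```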
This paper analyzes a society composed of individuals who have diverse sets of beliefs (or models) and diverse tastes (or utility functions). It characterizes the model selection process of a social planner who wishes to aggregate individuals' beliefs and tastes but is concerned that their beliefs are misspecified (or distorted). A novel impossibility result emerges: a utilitarian social planner who seeks robustness to misspecification never aggregates individuals' beliefs but instead behaves systematically as a dictator by selecting a single individual's belief. This tension between robustness and aggregation exists because aggregation yields policy-contingent beliefs, which are very sensitive to policy outcomes. Restoring the possibility of belief aggregation requires individuals to have heterogeneous tastes and some common beliefs. This analysis reveals that misspecification has significant economic implications for welfare aggregation. These implications are illustrated in treatment choice, asset pricing, and dynamic macroeconomics.
Global trade in material goods can create pathways for the spread of infectious pathogens. One trade sector in which this synergy is clearly critical is that of wildlife trade networks. This highly complex system involves important and understudied bidirectional coupling between the economic decision making of the stakeholders and the contagion dynamics on the emergent trade network. While each of these components is independently well studied, there is a meaningful gap in understanding the feedback dynamics that can arise between them. In the present study, we describe a general game-theoretic model for trade networks of goods susceptible to contagion. The primary result relies on the acyclic nature of the trade network and shows that, through the course of trading with stochastic infections, the probability of infection converges to a directly computable fixed point. This allows us to compute best responses and thus identify equilibria in the game. We present ways to use this model to describe and evaluate trade networks in terms of global and individual risk of infection under a wide variety of structural or individual modifications to the trade network. In capturing the bidirectional coupling of the system, we provide critical insight into the global and individual drivers and consequences of infection risks inherent in and arising from the global wildlife trade, and any economic trade network with associated contagion risks.
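On an acyclic trade network the fixed point can be computed by a single pass in topological order; the independence assumption and the per-edge transmission probabilities in this sketch are our simplification for illustration, not the paper's exact model.

```python
# Propagating infection probabilities through a DAG of trades to a fixed point.
# Assumes independent transmissions; edge attribute 'p' and the toy network
# below are illustrative.
import networkx as nx

def infection_probabilities(G, base):
    """G: DAG of trades with edge attribute 'p' (transmission probability);
    base[v]: probability v is infected from outside the network."""
    prob = {}
    for v in nx.topological_sort(G):
        # v escapes infection only if it avoids exogenous infection and
        # every upstream transmission fails (independence assumed).
        escape = 1.0 - base[v]
        for u in G.predecessors(v):
            escape *= 1.0 - G[u][v]["p"] * prob[u]
        prob[v] = 1.0 - escape
    return prob

G = nx.DiGraph()
G.add_edge("breeder", "wholesaler", p=0.5)
G.add_edge("wholesaler", "retailer", p=0.5)
base = {"breeder": 0.2, "wholesaler": 0.0, "retailer": 0.0}
print(infection_probabilities(G, base))
```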
This paper presents a model of network formation and public goods provision in local communities. Here, networks can sustain public good provision by spreading information about people's behaviour. I find a critical threshold in network connectedness at which public good provision drops sharply, even though agents are highly heterogeneous. Technology change can tear a community's social fabric by pushing high-skilled workers to withdraw from their local community. This can help explain rising resentment toward perceived ``elites'' -- their withdrawal actively harms those left behind. Moreover, well-meaning policies that upskill workers can make them worse off by reducing network connectedness.
A primary challenge in collective decision-making is that achieving unanimous agreement is difficult, even at the level of criteria. The history of social choice theory illustrates this: numerous normative criteria on voting rules have been proposed; however, disagreements persist regarding which criteria should take precedence. This study addresses the problem of ranking alternatives based on the aggregation of opinions over the criteria that the alternatives might fulfill. Using the opinion aggregation model, we propose a new rule, termed the Intersection Initial Segment (IIS) rule, and characterize it using five axioms: neutrality, independence of the worst set, independence of the best set, weak intersection very important player, and independence of non-unanimous improvement. We illustrate our approach on a running example where the objective is to rank voting rules, showing that our opinion aggregation model is particularly well suited to this context, and that the IIS rule is a counterpart to the method discussed by Nurmi (2015).
Reducing wealth inequality is a global challenge, and the problems of capitalism stem from the enclosure of the commons and the breakdown of the community. According to previous studies by Polanyi, Karatani, and Graeber, economic modes can be divided into the capitalist market economy (enclosure and exchange), the power economy (de-enclosure and redistribution), the gift economy (obligation to return and reciprocity), and the concession economy (release from the obligation to return). The concession economy reflects Graeber's baseline communism (from each according to their abilities, to each according to their needs) and Deguchi's We-turn philosophy (the "I" as an individual has a "fundamental incapability," and the subject of physical action, responsibility, and freedom is "We" as a multi-agent system including the "I"). In this study, we constructed novel network models for these four modes and compared their properties (clustering coefficient, graph density, reciprocity, assortativity, centrality, and Gini coefficient). The calculations show that the market economy leads to inequality; the power economy mitigates inequality but cannot eliminate it; the gift and concession economies lead to a healthy and equal economy; and the concession economy, free from the ties of the obligation to return, is possible without guaranteeing reciprocity. We intend to promote the transformation from a capitalist economy to a concession economy through activities that disseminate baseline communism and the We-turn philosophy: developing a cooperative platform that supports concession through information technology, and conducting empirical research through fieldwork.
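A sketch of the kind of comparison the study performs, computing the listed metrics on a directed network; the random graph and the Pareto wealth draw below are placeholders for the four bespoke mode-specific models, which the abstract does not specify.

```python
# Computing the network and inequality metrics named in the abstract.
# The gnp random graph and heavy-tailed wealth vector are illustrative
# stand-ins for the paper's four economic-mode models.
import networkx as nx
import numpy as np

def gini(x):
    """Gini coefficient of a nonnegative wealth vector."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

def summarize(G, wealth):
    return {
        "clustering": nx.average_clustering(G),
        "density": nx.density(G),
        "reciprocity": nx.reciprocity(G),
        "assortativity": nx.degree_assortativity_coefficient(G),
        "gini": gini(wealth),
    }

rng = np.random.default_rng(0)
G = nx.gnp_random_graph(100, 0.05, directed=True, seed=0)
wealth = rng.pareto(2.0, 100) + 1      # heavy-tailed wealth, illustrative
print(summarize(G, wealth))
```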
I consider the problem of classifying individual behavior in a simple setting of outcome performativity where the behavior the algorithm seeks to classify is itself dependent on the algorithm. I show in this context that the most accurate classifier is either a threshold or a negative threshold rule. A threshold rule offers the "good" classification to those individuals whose outcome likelihoods are greater than some cutpoint, while a negative threshold rule offers the "good" outcome to those whose outcome likelihoods are less than some cutpoint. While seemingly pathological, I show that a negative threshold rule can be the most accurate classifier when outcomes are performative. I provide an example of such a classifier, and extend the analysis to more general algorithm objectives, allowing the algorithm to differentially weigh false negatives and false positives, for example.
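The two rule families are simple to state; in the sketch below the cutpoints are free parameters, which, in the paper's performative setting, the accuracy objective would pin down.

```python
# Threshold vs. negative threshold rules over outcome likelihoods.
# Cutpoints are illustrative free parameters.

def threshold_rule(likelihood, cut):
    return likelihood > cut    # "good" classification above the cutpoint

def negative_threshold_rule(likelihood, cut):
    return likelihood < cut    # "good" classification below the cutpoint

# Under performativity the outcome shifts with the rule itself; if being
# classified "good" discourages the outcome, offering it to low-likelihood
# individuals (negative threshold) can end up more accurate.
for p in (0.2, 0.5, 0.8):
    print(p, threshold_rule(p, 0.5), negative_threshold_rule(p, 0.5))
```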
New fairness notions aligned with the merit principle are proposed for designing exchange rules. We show that, for an obviously strategy-proof, efficient, and individually rational rule, an upper bound on attainable fairness is the following: if two agents possess objects considered the best by all others, then at least one of them receives her favorite object. Notably, it is not possible to guarantee that both receive their favorites. Our results thus indicate an unambiguous trade-off between incentives and fairness in the design of exchange rules.
An analyst observes an agent take a sequence of actions. The analyst does not have access to the agent's information and ponders whether the observed actions could be justified through a rational Bayesian model with a known utility function. We show that the observed actions cannot be justified if and only if there is a single deviation argument that leaves the agent better off, regardless of the information. The result is then extended to allow for distributions over possible action sequences. Four applications are presented: monotonicity of rationalization with risk aversion, a potential rejection of the Bayesian model with observable data, feasible outcomes in dynamic information design, and partial identification of preferences without assumptions on information.
Stabilizing cooperation among self-interested individuals presents a fundamental challenge in evolutionary theory and social science. While classical models predict the dominance of defection in social dilemmas, empirical and theoretical studies have identified various mechanisms that promote cooperation, including kin selection, reciprocity, and spatial structure. In this work, we investigate the role of localized imitation in the evolutionary dynamics of cooperation within an optional Public Goods Game (PGG). We introduce a model where individuals belong to distinct groups and adapt their strategies based solely on comparisons within their own group. We identify different dynamical regimes, including stable fixed points, limit cycles, and Rock-Scissors-Paper-type oscillations. Our analysis, grounded in a replicator-type framework, reveals that such group-level imitation can stabilize cooperative behavior, provided that groups are not initially polarized around a single strategy. In other words, restricting imitation to group-level interactions mitigates the destabilizing effects of global competition, providing a potential explanation for the resilience of cooperation in structured populations.
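A minimal replicator-dynamics sketch for the three strategies of the optional PGG; the payoff function is a simplified well-mixed stand-in rather than the paper's group-structured imitation model, and serves only to illustrate the cyclic cooperate-defect-abstain dynamics.

```python
# Replicator dynamics for (cooperate, defect, loner) in an optional PGG.
# Payoffs are a simplified illustration: participants split r*c per cooperator,
# cooperators pay cost c, and loners opt out for a fixed payoff sigma.
import numpy as np

def payoffs(x, r=3.0, c=1.0, sigma=0.3):
    xc, xd, xl = x
    part = xc + xd                       # fraction participating in the game
    if part < 1e-12:
        return np.array([sigma, sigma, sigma])
    share = r * c * xc / part            # public good returned per participant
    return np.array([share - c, share, sigma])

def replicator(x0, steps=20000, dt=0.01):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        f = payoffs(x)
        x += dt * x * (f - x @ f)        # dx_i = x_i * (f_i - mean fitness)
        x = np.clip(x, 0.0, None)
        x /= x.sum()
    return x

print(replicator([0.4, 0.3, 0.3]))
```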