The configuration model is a cornerstone of statistical assessment of network structure. While the Chung-Lu model is among the most widely used configuration models, it systematically oversamples edges between large-degree nodes, leading to inaccurate statistical conclusions. Although the maximum entropy principle offers unbiased configuration models, its high computational cost has hindered widespread adoption, making the Chung-Lu model an inaccurate yet persistently practical choice. Here, we propose fast and efficient sampling algorithms for the max-entropy-based models by adapting the Miller-Hagberg algorithm. Evaluation on 103 empirical networks demonstrates 10-1000 times speedup, making theoretically rigorous configuration models practical and contributing to a more accurate understanding of network structure.
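A minimal numpy sketch (ours, not the paper's implementation) of the two connection-probability kernels at issue: the Chung-Lu kernel, which caps k_i k_j / (2m) at one and hence oversamples hub-hub edges, versus the max-entropy kernel p_ij = x_i x_j / (1 + x_i x_j) of the soft configuration model. Fitting the x_i so that expected degrees match the target sequence, and the Miller-Hagberg sorted-weight skipping that replaces the O(n^2) loop below, are omitted here.

```python
import numpy as np

def chung_lu_probs(k):
    # Chung-Lu kernel: p_ij = min(1, k_i k_j / (2m)); the cap at 1 is
    # what biases edges between large-degree nodes.
    P = np.minimum(1.0, np.outer(k, k) / k.sum())   # k.sum() = 2m
    np.fill_diagonal(P, 0.0)
    return P

def max_entropy_probs(x):
    # Soft configuration model kernel: p_ij = x_i x_j / (1 + x_i x_j),
    # with x fitted so that sum_j p_ij equals the target degree k_i.
    XX = np.outer(x, x)
    P = XX / (1.0 + XX)
    np.fill_diagonal(P, 0.0)
    return P

def sample_simple_graph(P, rng=np.random.default_rng(0)):
    # Naive O(n^2) Bernoulli sampler; the Miller-Hagberg adaptation
    # avoids this cost via degree sorting and geometric edge skipping.
    U = rng.random(P.shape)
    A = np.triu(U < P, k=1)
    return (A | A.T).astype(int)
```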
This paper explores the impact of tournament design on the incentives of the contestants. We develop a simulation framework to quantify the potential gain and loss from attacking, based on changes in the probability of reaching critical ranking thresholds. The model is applied to investigate the 2024/25 UEFA Champions League reform. The novel incomplete round-robin league phase is found to create more powerful incentives for offensive play than the previous group stage, with an average increase of 119\% (58\%) with respect to the first (second) prize. Our study provides the first demonstration that the tournament format itself can strongly influence team behaviour in sports.
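As a toy illustration of the incentive measure described above (with made-up outcome probabilities, not the paper's calibrated model), one can estimate how switching from cautious to attacking play changes the probability of clearing a ranking threshold, here a top-8 finish after 8 league-phase matches:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_matches, n_rivals, slots = 20000, 8, 35, 8

def points(p_win, p_draw):
    # Season points from n_matches under (win, draw, loss) probabilities.
    return rng.choice([3, 1, 0], size=(n_sims, n_matches),
                      p=[p_win, p_draw, 1 - p_win - p_draw]).sum(axis=1)

# A field of rivals with fixed, hypothetical outcome probabilities.
rivals = rng.choice([3, 1, 0], size=(n_sims, n_rivals, n_matches),
                    p=[0.45, 0.28, 0.27]).sum(axis=2)

def p_reach(pts):
    # Probability of finishing within the top `slots` of the field.
    rank = (rivals > pts[:, None]).sum(axis=1) + 1
    return (rank <= slots).mean()

# Hypothetical trade-off: attacking raises the win probability but
# lowers the draw probability.
for label, pw, pd in [("cautious ", 0.40, 0.35), ("attacking", 0.48, 0.18)]:
    print(label, round(p_reach(points(pw, pd)), 3))
```

The incentive for offensive play is then the change in threshold-reaching probability between the two styles.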
This paper presents a second-order model of capacity drop at expressway lane-drop bottlenecks. Extending Jin's model (Jin, 2017), it captures not only the stationary state associated with the capacity drop but also the transitional dynamics leading from the onset of congestion to that state. The characteristics of the proposed model are examined theoretically and numerically. The results show that the capacity drop stationary state is stable and is reached immediately once congestion occurs. Furthermore, we validate the model using empirical data. The results suggest that the model has the potential to provide new insights into congestion phenomena at expressway lane-drop bottlenecks.
Accessibility is essential for designing inclusive urban systems. However, the attempt to capture the complexity of accessibility in a single universal metric has often limited its effective use in design, measurement, and governance across various fields. Building on the work of Levinson and Wu, we emphasise that accessibility consists of several key dimensions. Specifically, we introduce a conceptual framework that defines accessibility through three main dimensions: Proximity (which pertains to active, short-range accessibility to local services and amenities), Opportunity (which refers to quick access to relevant non-local resources, such as jobs or major cultural venues), and Value (which encompasses the overall quality and personal significance assigned to specific points of interest). While it is generally beneficial to improve accessibility, different users and contexts present unique trade-offs that make a one-size-fits-all solution neither practical nor desirable. Our framework establishes a foundation for a quantitative and integrative approach to modelling accessibility. It considers the complex interactions among its various dimensions and facilitates more systematic analysis, comparison, and decision-making across diverse contexts.
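Purely as an illustrative sketch of how the three dimensions might enter a quantitative measure (our hypothetical composition, not the authors' formalism), a Hansen-type formulation could weight destinations by Value and apply separate impedance kernels to local and non-local opportunity sets:

\[
A_i \;=\; \underbrace{\sum_{j \in \mathcal{P}_i} V_j \, f_P(c_{ij})}_{\text{Proximity}}
\;+\; \underbrace{\sum_{j \in \mathcal{O}_i} V_j \, f_O(c_{ij})}_{\text{Opportunity}},
\]

where $\mathcal{P}_i$ collects local amenities reachable by active modes, $\mathcal{O}_i$ relevant non-local resources such as jobs or major cultural venues, $c_{ij}$ is the travel cost, $V_j$ the Value assigned to destination $j$, and $f_P$, $f_O$ are decreasing impedance functions (e.g., $f(c) = e^{-\beta c}$).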
Epidemic control frequently relies on adjusting interventions based on prevalence. But designing such policies is a highly non-trivial problem owing to uncertain intervention effects and costs and the difficulty of quantifying key transmission mechanisms and parameters. Here, using exact mathematical and computational methods, we reveal a fundamental limit of epidemic control: prevalence-feedback policies are outperformed by a single, optimally chosen constant control level. Specifically, we find no incentive to use prevalence-based control under a wide class of cost functions that depend arbitrarily on interventions and scale with infections. We also identify regimes where prevalence feedback is beneficial. Our results challenge the current understanding that prevalence-based interventions are required for epidemic control and suggest that, for many classes of epidemics, interventions should not be varied unless the epidemic is near the herd immunity threshold.
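A toy version of the comparison (our sketch, with placeholder parameters and a standard SIR model rather than the paper's exact setting): an intervention level u in [0, 1] scales transmission, and the running cost scales with infections, as in the cost class described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, kappa = 0.3, 0.1, 2.0   # placeholder rates and cost weight

def total_cost(control, y0=(0.99, 0.01), t_end=400.0):
    # SIR dynamics with intervention u in [0, 1] scaling transmission;
    # the running cost scales with infections: dC/dt = I * (1 + kappa * u).
    def rhs(t, y):
        S, I, C = y
        u = control(I)
        new_inf = (1.0 - u) * beta * S * I
        return [-new_inf, new_inf - gamma * I, I * (1.0 + kappa * u)]
    sol = solve_ivp(rhs, (0.0, t_end), [*y0, 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[2, -1]   # accumulated cost

constant = lambda I: 0.35                              # fixed control level
feedback = lambda I: min(1.0, max(0.0, 8.0 * I))       # prevalence feedback

print("constant control cost:", total_cost(constant))
print("feedback control cost:", total_cost(feedback))
```

Sweeping the constant level and the feedback gain and comparing the respective minima gives the flavour of the comparison made in the paper.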
Braess' paradox in road traffic systems, in which adding capacity can worsen travel times, underscores the value of careful transportation planning for improving efficiency. The phenomenon has rarely been examined in pedestrian evacuation traffic, however, even though the possibility that it could lengthen evacuation times under hazardous conditions is of clear concern. In this paper, we investigate the potential occurrence of Braess' paradox in pedestrian evacuation traffic through a series of supervised experiments and corresponding traffic assignment models. Our empirical and modeling results indicate that Braess' paradox is unlikely to be a prevalent phenomenon in pedestrian traffic systems. Specifically, under autonomous evacuation and the assumption of complete network knowledge, the paradox does not arise in high-demand evacuation contexts in our case studies. Under a more realistic assumption of limited network knowledge, however, the paradox can occur. These findings highlight the importance of information conditions for evacuation performance and provide guidance for the design and management of large public venues.
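For readers unfamiliar with the paradox, the classic textbook instance (not the paper's evacuation networks) shows how adding a link can worsen everyone's equilibrium travel time:

```python
# Classic Braess example: N travelers go from s to t over two routes,
# each combining a variable link (cost x/100) and a constant link (45).
N = 4000.0

# Without the shortcut, the symmetric user equilibrium splits flow evenly.
x = N / 2
t_without = x / 100 + 45            # = 65

# A zero-cost shortcut lets every traveler chain both variable links;
# at equilibrium all N do so, and no one can improve unilaterally.
t_with = N / 100 + N / 100          # = 80

print(t_without, t_with)            # adding the link worsens travel time
```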
We study the Susceptible-Infectious-Susceptible (SIS) model on arbitrary networks. The well-established pair approximation treats neighboring pairs of nodes exactly while making a mean field approximation for the rest of the network. We improve the method by expanding the state space dynamically, giving nodes a memory of when they last became susceptible. The resulting approximation is simple to implement and appears to be highly accurate, both in locating the epidemic threshold and in computing the quasi-stationary fraction of infected individuals above the threshold, for both finite graphs and infinite random graphs.
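For reference, a minimal sketch of the standard (memoryless) pair approximation for SIS on a k-regular network, the baseline the proposed method refines; the dynamical state-space expansion with susceptibility-age memory is not reproduced here, and parameters are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Arbitrary parameters for an SIS epidemic on a k-regular network,
# population normalised to N = 1.
tau, gamma, k, N = 0.6, 1.0, 4, 1.0

def closure(AS, SI, S):
    # Pair closure for triples: [ASI] ~ ((k-1)/k) [AS][SI] / [S].
    return (k - 1) / k * AS * SI / max(S, 1e-12)

def rhs(t, y):
    I, SI, II = y                      # node and (ordered) pair counts
    S = N - I
    SS = k * N - 2 * SI - II           # conservation of pairs
    SSI = closure(SS, SI, S)
    ISI = closure(SI, SI, S)
    dI = tau * SI - gamma * I
    dSI = gamma * (II - SI) + tau * (SSI - ISI - SI)
    dII = 2 * tau * (ISI + SI) - 2 * gamma * II
    return [dI, dSI, dII]

i0 = 0.01                              # initially well-mixed infections
y0 = [i0, k * i0 * (1 - i0), k * i0 ** 2]
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-8)
print("quasi-stationary infected fraction ~", sol.y[0, -1] / N)
```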
How regional heterogeneity in social and cultural processes drives, and responds to, climate dynamics is little studied. Here we present a coupled social-climate model stratified across five world regions and parameterized with geophysical, economic and social survey data. We find that support for mitigation evolves in a highly variable fashion across regions, according to socio-economics, climate vulnerability, and feedback from changing temperatures. Social learning and social norms can amplify existing sentiment about mitigation, leading to better or worse global warming outcomes depending on the region. Moreover, mitigation in one region, as mediated by temperature dynamics, can influence other regions to act, or just sit back, thus driving cross-regional heterogeneity in mitigation opinions. The peak temperature anomaly varies by several degrees Celsius depending on how these interactions unfold. Our model exemplifies a framework for studying how global geophysical processes interact with population-scale concerns to determine future sustainability outcomes.
In natural ecosystems and human societies, self-organized resource allocation and policy synergy are ubiquitous and significant. This work focuses on the synergy between Dual Reinforcement Learning Policies in the Minority Game (DRLP-MG) to optimize resource allocation. Our study examines a mixed-structured population with two sub-populations: a Q-subpopulation using a Q-learning policy and a C-subpopulation adopting the classical policy. We first identify a synergy effect between these subpopulations: a first-order phase transition occurs as the mixing ratio of the subpopulations changes. Further analysis reveals that the Q-subpopulation consists of two internal synergy clusters (IS-clusters) and a single external synergy cluster (ES-cluster). The former contribute to the internal synergy within the Q-subpopulation through synchronization and anti-synchronization, whereas the latter engages in the inter-subpopulation synergy. Within the ES-cluster, the classical momentum strategy from financial markets emerges and assumes a crucial role in the inter-subpopulation synergy. This strategy prevents long-term under-utilization of resources, but it also triggers trend reversals and reduces the rewards of those who adopt it. Our research reveals that the frozen effect, in either the C- or the Q-subpopulation, is a crucial prerequisite for synergy, consistent with previous studies. We also conduct mathematical analyses of the subpopulation synergy effects and of the synchronization and anti-synchronization forms of the IS-clusters in the Q-subpopulation. Overall, our work comprehensively explores the complex resource-allocation dynamics in DRLP-MG and uncovers multiple synergy mechanisms and the conditions under which they arise, enriching the theoretical understanding of reinforcement-learning-based resource allocation and offering valuable practical insights.
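A minimal mixed-population sketch along these lines (our simplification: a shared public history state, epsilon-greedy tabular Q-learning, and standard two-table classical MG agents; the paper's exact policies and measurements may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, T = 301, 2, 5000                 # odd population, history length, rounds
n_states = 2 ** m
frac_q = 0.5                           # mixing ratio of the two subpopulations
nQ = int(frac_q * N); nC = N - nQ

# C-subpopulation: classical MG agents with two fixed random strategy
# tables each, playing whichever table has the higher virtual score.
strats = rng.integers(0, 2, size=(nC, 2, n_states))
scores = np.zeros((nC, 2))

# Q-subpopulation: epsilon-greedy tabular Q-learning on the public history.
Q = np.zeros((nQ, n_states, 2))
eps, alpha, disc = 0.05, 0.1, 0.9
idx = np.arange(nQ)

state, attendance = 0, []
for t in range(T):
    aC = strats[np.arange(nC), scores.argmax(axis=1), state]
    greedy = Q[:, state, :].argmax(axis=1)
    aQ = np.where(rng.random(nQ) < eps, rng.integers(0, 2, nQ), greedy)
    n_ones = aC.sum() + aQ.sum()
    minority = int(n_ones < N / 2)      # the less-chosen action wins
    attendance.append(n_ones)
    scores += (strats[:, :, state] == minority)        # virtual scores
    nxt = ((state << 1) | minority) & (n_states - 1)   # roll the history
    reward = (aQ == minority).astype(float)
    td = reward + disc * Q[idx, nxt].max(axis=1) - Q[idx, state, aQ]
    Q[idx, state, aQ] += alpha * td
    state = nxt

att = np.asarray(attendance[T // 2:])
print("volatility sigma^2 / N =", att.var() / N)       # resource efficiency
```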
Policies focused on deep decarbonization of regional economies emphasize electricity sector decarbonization alongside electrification of end-uses. There is growing interest in utilizing hydrogen (H2) produced via electricity to displace fossil fuels in difficult-to-electrify sectors. One such case is heavy-duty vehicles (HDV), which represent a substantial and growing share of transport emissions as light-duty vehicles electrify. Here, we assess the bulk energy system impact of decarbonizing the HDV segment via either H2 or drop-in synthetic liquid fuels produced from H2 and CO2. Our analysis soft-links two modeling approaches: (a) a bottom-up transport demand model producing a variety of final energy demand scenarios for the same service demand and (b) a multi-sectoral capacity expansion model that co-optimizes power, H2 and CO2 supply chains under technological and policy constraints to meet exogenous final energy demands. Through a case study of Western Europe in 2040 under deep decarbonization constraints, we quantify the energy system implications of different levels of H2 and synthetic fuel adoption in the HDV sector under scenarios with and without CO2 sequestration. In the absence of CO2 storage, substitution of liquid fossil fuels in HDVs is essential to meet the deep decarbonization constraint across the modeled power, H2 and transport sectors. Additionally, utilizing H2 HDVs reduces decarbonization costs and fossil liquids demand, but could increase natural gas consumption. While H2 HDV adoption reduces the need for direct air capture (DAC), synthetic fuel adoption increases DAC investments and total system costs. The study highlights the trade-offs across transport decarbonization pathways, and underscores the importance of multi-sectoral consideration in decarbonization studies.
Migration patterns are complex and context-dependent, with the distances migrants travel varying greatly depending on socio-economic and demographic factors. While global migration studies often focus on Western countries, there is a crucial gap in our understanding of migration dynamics within the African continent, particularly in West Africa. Using data from over 60,000 individuals from eight West African countries, this study examines the determinants of migration distance in the region. Our analysis reveals a bimodal distribution of migration distances: while most migrants travel locally within a hundred km, a smaller yet significant portion undertakes long-distance journeys, often exceeding 3,000 km. Socio-economic factors such as employment status, marital status and level of education play a decisive role in determining migration distances. Unemployed migrants, for instance, travel substantially farther (1,467 km on average) than their employed counterparts (295 km). Furthermore, we find that conflict-induced migration is particularly variable, with migrants fleeing violence often undertaking longer and riskier journeys. Our findings highlight the importance of considering both local and long-distance migration in policy decisions and support systems, as well as the need for a comprehensive understanding of migration in non-Western contexts. This study contributes to the broader discourse on human mobility by providing new insights into migration patterns in Western Africa, which in turn has implications for global migration research and policy development.
Indirect reciprocity promotes cooperation by allowing individuals to help others based on reputation rather than direct reciprocation. Because it relies on accurate reputation information, its effectiveness can be undermined by information gaps. We examine two forms of incomplete information: incomplete observation, in which donor actions are observed only probabilistically, and reputation fading, in which recipient reputations are sometimes classified as "Unknown". Using analytical frameworks for public assessment, we show that these seemingly similar models yield qualitatively different outcomes. Under incomplete observation, the conditions for cooperation are unchanged, because less frequent updates are exactly offset by higher reputational stakes. In contrast, reputation fading hinders cooperation, requiring higher benefit-to-cost ratios as the identification probability decreases. We then evaluate costly punishment as a third action alongside cooperation and defection. Norms incorporating punishment can sustain cooperation across broader parameter ranges without reducing efficiency in the reputation fading model. This contrasts with previous work, which found punishment ineffective under a different type of information limitation, and highlights the importance of distinguishing between types of information constraints. Finally, we review past studies to identify when punishment is effective and when it is not in indirect reciprocity.
The shortest-path percolation (SPP) model aims at describing the consumption, and eventual exhaustion, of a network's resources. Starting from a graph containing a macroscopic connected component, random pairs of nodes are sequentially selected and, if the length of the shortest path connecting a selected pair is smaller than a tunable budget parameter, all edges along that path are removed from the network. As edges are progressively removed, the network eventually breaks into multiple microscopic components, undergoing a percolation-like transition. It is known that the SPP transition on Erd\H{o}s-R\'enyi (ER) graphs belongs to the same universality class as ordinary bond percolation if the budget parameter is finite; for unbounded budget, instead, the SPP transition becomes more abrupt than the ordinary percolation transition. By means of large-scale numerical simulations and finite-size scaling analysis, here we study the SPP transition on random scale-free networks (SFNs) characterized by power-law degree distributions. In contrast with standard percolation, we find that the transition is identical to the one observed on ER graphs, indicating independence from the degree exponent. Still, we distinguish finite- and infinite-budget SPP universality classes. Our findings follow from the fact that the SPP process drastically homogenizes the heterogeneous structure of SFNs before the transition takes place.
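The SPP process as described transcribes almost directly into code; a minimal networkx sketch (graph construction, budget, and number of pair draws are arbitrary choices, not the paper's):

```python
import random
import networkx as nx

def spp(G, budget, n_steps, seed=0):
    # Repeatedly draw random node pairs; if their shortest path is no
    # longer than the budget, consume (remove) all edges along it.
    rng = random.Random(seed)
    G = G.copy()
    nodes = list(G.nodes)
    giant = []
    for _ in range(n_steps):
        u, v = rng.sample(nodes, 2)
        if nx.has_path(G, u, v):
            path = nx.shortest_path(G, u, v)
            if len(path) - 1 <= budget:
                G.remove_edges_from(zip(path[:-1], path[1:]))
        # order parameter: relative size of the largest component
        giant.append(len(max(nx.connected_components(G), key=len)) / len(nodes))
    return giant

# e.g. a scale-free network from expected power-law degrees (capped):
w = [min(d, 50) for d in nx.utils.powerlaw_sequence(2000, 2.7)]
G = nx.expected_degree_graph(w, selfloops=False)
trace = spp(G, budget=4, n_steps=4000)
```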
Current approaches to the design and regulation of nuclear energy facilities offer limited opportunities for public input, particularly for host communities to shape decisions about a facility's aesthetics, its socioeconomic and environmental impacts, or even its levels of safety. In this paper, we propose a community-engaged approach to designing microreactors. In a participatory design workshop, we invited community members to work with engineers to create designs for hypothetical microreactor facilities for Southeast Michigan as a way to understand their hopes, concerns, and preferences. Our findings reveal a desire for local energy infrastructure to not just provide a service (energy) but also to be a central and accessible feature of the community. Community members articulated several specific ways in which the hypothetical facilities could be designed, with particular focus placed on the well-being of local families as well as employment opportunities. These findings call into question current microreactor design trajectories that seek to achieve high levels of automation. Our findings also suggest a need for contextual design that may be at odds with the logics of standardization currently being pursued by reactor designers. We call on microreactor developers to carry out such participatory design engagements in other places as a way to build a more comprehensive, place-based understanding of local preferences for community-embedded energy infrastructure.
Comprehensive Life Cycle Assessment (LCA), which accounts for the full range of environmental impacts of resource use in commodities or services, is a first step toward reducing these impacts. There is an increasing necessity to account for these aspects in the planning, running and end-of-life of scientific experiments and research infrastructure. In the following, the concept for an Open Research Life Cycle Assessment (ORLCA) repository is presented to support this endeavour. It is designed to comply fully with the principles of findability, accessibility, interoperability, and reusability (FAIR).
Through the Climate Change Act (2008), the Government of the United Kingdom encourages corporations to enhance their environmental performance, with the significant aim of reducing targeted greenhouse gas emissions by the year 2050. Previous research has predominantly assessed this encouragement favourably, suggesting that improved environmental performance bolsters governmental efforts to protect the environment and fosters commendable corporate governance practices among companies. Studies indicate that organisations exhibiting strong corporate social responsibility (CSR), environmental, social, and governance (ESG) criteria, or high levels of environmental performance often engage in lower occurrences of tax avoidance. However, our findings suggest that an increase in environmental performance may paradoxically lead to a rise in tax avoidance activities. Using a sample of 567 firms listed on the FTSE All Share from 2014 to 2022, our study finds that firms associated with higher environmental performance are more likely to avoid taxation. The study further documents that the effect is more pronounced for firms facing financial constraints. Entropy balancing, propensity score matching analysis, the instrumental variable method, and the Heckman test are employed in our study to address potential endogeneity concerns. Collectively, the findings of our study suggest that better environmental performance helps explain the variation in firms' tax avoidance practices.
Many complex networks, ranging from social to biological systems, exhibit structural patterns consistent with an underlying hyperbolic geometry. Revealing the dimensionality of this latent space can disentangle the structural complexity of communities, impact efficient network navigation, and fundamentally shape connectivity and system behavior. We introduce a novel topological data analysis weighting scheme for graphs, based on chordless cycles, aimed at estimating the dimensionality of networks in a data-driven way. We further show that the resulting descriptors can effectively estimate network dimensionality using a neural network architecture trained on a synthetic graph database constructed for this purpose, which does not need retraining to transfer effectively to real-world networks. Thus, by combining cycle-aware filtrations, algebraic topology, and machine learning, our approach provides a robust and effective method for uncovering the hidden geometry of complex networks and guiding accurate modeling and low-dimensional embedding.
Many real-world networks, ranging from subway systems to polymer structures and fungal mycelia, do not form by the incremental addition of individual nodes but instead grow through the successive extension and intersection of lines or filaments. Yet most existing models for spatial network formation focus on node-based growth, leaving a significant gap in our understanding of systems built from spatially extended components. Here we introduce a minimal model for spatial networks, rooted in the iterative growth and intersection of lines, a mechanism inspired by diverse systems including transportation networks, fungal hyphae, and vascular structures. Unlike classical approaches, our model constructs networks by sequentially adding lines across a domain populated with randomly distributed points. Each line grows greedily to maximize local coverage, while subject to angular continuity and the requirement to intersect existing structures. This emphasis on extended, interacting elements governed by local optimization and geometric constraints leads to the spontaneous emergence of a core-and-branches architecture. The resulting networks display a range of non-trivial scaling behaviors: the number of intersections grows subquadratically; Flory exponents and fractal dimensions emerge consistent with empirical observations; and spatial scaling exponents depend on the heterogeneity of the underlying point distribution, aligning with measurements from subway systems. Our model thus captures key organizational features observed across diverse real-world networks, establishing a universal paradigm that goes beyond node-based approaches and demonstrates how the growth of spatially extended elements can shape the large-scale architecture of complex systems.
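A heavily simplified sketch of line-based growth (ours, not the authors' model: it keeps greedy extension under an angular-continuity constraint over randomly scattered points, but drops the intersection requirement and the local-coverage objective):

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.random((500, 2))             # points scattered in the unit square
covered = np.zeros(len(pts), bool)
max_turn = np.pi / 6                   # angular-continuity constraint

def grow_line(start):
    # Greedily hop to the nearest uncovered point whose direction deviates
    # from the current heading by less than max_turn; stop when none fits.
    line, heading = [start], None
    covered[start] = True
    while True:
        cand = np.flatnonzero(~covered)
        if cand.size == 0:
            break
        vecs = pts[cand] - pts[line[-1]]
        dists = np.hypot(vecs[:, 0], vecs[:, 1])
        angles = np.arctan2(vecs[:, 1], vecs[:, 0])
        if heading is not None:
            keep = np.abs((angles - heading + np.pi) % (2 * np.pi) - np.pi) < max_turn
            cand, dists, angles = cand[keep], dists[keep], angles[keep]
        if cand.size == 0:
            break
        j = dists.argmin()
        heading = angles[j]
        covered[cand[j]] = True
        line.append(cand[j])
    return line

lines = []
while not covered.all():
    lines.append(grow_line(rng.choice(np.flatnonzero(~covered))))
print(len(lines), "lines grown over", len(pts), "points")
```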