The networked nature of supply chains makes them susceptible to systemic risk, where local firm failures can propagate through firm interdependencies and lead to cascading supply chain disruptions. The systemic risk of supply chains can be quantified and is closely related to the topology and dynamics of supply chain networks (SCNs). How different network properties contribute to this risk remains unclear. Here, we ask whether systemic risk can be significantly reduced by strategically rewiring supplier-customer links. In doing so, we seek to understand the role of specific endogenously emerging network structures and to what extent the observed systemic risk is a result of fundamental properties of the dynamical system. We minimize systemic risk through rewiring by employing a method from statistical physics that respects firm-level constraints to production. Analyzing six specific subnetworks of the national SCNs of Ecuador and Hungary, we demonstrate that systemic risk can be reduced considerably, by 16-50%, without lowering the production output of firms. A comparison of network properties before and after rewiring reveals that this risk reduction is achieved by changing the connectivity in non-trivial ways. These results suggest that actual SCN topologies carry unnecessarily high levels of systemic risk. We discuss the possibility of devising policies that reduce systemic risk through minimal, targeted interventions in supply chain networks via market-based incentives.
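As an illustration of the rewiring idea in this abstract, the sketch below greedily rewires a small synthetic directed supplier-customer network while preserving every firm's in- and out-degree (a crude stand-in for firm-level production constraints), using a toy cascade-size measure as the systemic-risk proxy. The network, the risk proxy, and the acceptance rule are illustrative assumptions, not the paper's systemic-risk index or optimization method.

```python
# Illustrative sketch only: a toy cascade-based risk proxy and degree-preserving
# greedy rewiring on a synthetic network, not the paper's systemic-risk index.
import random
import networkx as nx

def cascade_risk(G):
    """Average fraction of firms cut off from all suppliers when one firm fails."""
    n = G.number_of_nodes()
    total = 0.0
    for f in G.nodes:
        failed, changed = {f}, True
        while changed:
            changed = False
            for v in G.nodes:
                if v in failed:
                    continue
                suppliers = list(G.predecessors(v))
                if suppliers and all(s in failed for s in suppliers):
                    failed.add(v)          # a firm fails once all its suppliers fail
                    changed = True
        total += (len(failed) - 1) / (n - 1)
    return total / n

def swap(G):
    """Swap the targets of two random edges; keeps every in- and out-degree."""
    (a, b), (c, d) = random.sample(list(G.edges), 2)
    if len({a, b, c, d}) == 4 and not G.has_edge(a, d) and not G.has_edge(c, b):
        G.remove_edges_from([(a, b), (c, d)])
        G.add_edges_from([(a, d), (c, b)])
        return [(a, d), (c, b)], [(a, b), (c, d)]
    return None, None

random.seed(1)
G = nx.gnm_random_graph(30, 90, seed=1, directed=True)
risk = cascade_risk(G)
print(f"initial risk proxy: {risk:.3f}")
for _ in range(300):                       # greedy: keep swaps that do not raise risk
    new, old = swap(G)
    if new is None:
        continue
    r = cascade_risk(G)
    if r <= risk:
        risk = r
    else:
        G.remove_edges_from(new)           # undo the swap
        G.add_edges_from(old)
print(f"risk proxy after rewiring: {risk:.3f}")
```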
The concept of quality of life in urban settings is increasingly associated with the accessibility of amenities within a short walking distance for residents. However, this narrative still requires thorough empirical investigation to evaluate its practical implications, benefits, and challenges. In this work, we propose a novel methodology for evaluating urban accessibility to services, with an application to the city of Florence, Italy. Our approach identifies the accessibility of essential services from residential buildings within a 10-minute walking distance, employing a rigorous spatial analysis process and open-source geospatial data. As a second contribution, we extend the concept of 10-minute accessibility within a network theory framework and apply a clustering algorithm to identify urban communities based on shared access to essential services. Finally, we explore the dimension of functional redundancy. Our proposed metrics represent a step towards an accurate assessment of adherence to the 10-minute city model and offer a valuable tool for place-based policies aimed at addressing spatial disparities in urban development.
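A minimal sketch of the first step described here, computed on a toy pedestrian graph rather than the Florence data: mark which residential nodes reach each service category within a 10-minute walk. The 5 km/h walking speed, the node names, and the distances are all invented for illustration.

```python
# Illustrative 10-minute accessibility check on a toy pedestrian network.
import networkx as nx

WALK_SPEED_M_PER_MIN = 5000 / 60          # assumed 5 km/h, ~83 m per minute
BUDGET_M = 10 * WALK_SPEED_M_PER_MIN      # 10-minute distance budget

# toy pedestrian network: edges weighted by length in metres
G = nx.Graph()
G.add_weighted_edges_from([
    ("home_A", "x1", 200), ("x1", "pharmacy", 300),
    ("x1", "x2", 450), ("x2", "school", 250),
    ("home_B", "x2", 700), ("home_B", "supermarket", 400),
])

services = {"pharmacy": "health", "school": "education", "supermarket": "food"}
homes = ["home_A", "home_B"]

for home in homes:
    dist = nx.single_source_dijkstra_path_length(G, home, cutoff=BUDGET_M,
                                                 weight="weight")
    reachable = {cat for node, cat in services.items() if node in dist}
    print(home, "reaches within 10 min:", sorted(reachable))
```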
This perspective paper examines a fundamental paradox in the relationship between professional expertise and artificial intelligence: as domain experts increasingly collaborate with AI systems by externalizing their implicit knowledge, they potentially accelerate the automation of their own expertise. Through analysis of multiple professional contexts, we identify emerging patterns in human-AI collaboration and propose frameworks for professionals to navigate this evolving landscape. Drawing on research in knowledge management, expertise studies, human-computer interaction, and labor economics, we develop a nuanced understanding of how professional value may be preserved and transformed in an era of increasingly capable AI systems. Our analysis suggests that while the externalization of tacit knowledge presents certain risks to traditional professional roles, it also creates opportunities for the evolution of expertise and the emergence of new forms of professional value. We conclude with implications for professional education, organizational design, and policy development that can help ensure the codification of expert knowledge enhances rather than diminishes the value of human expertise.
This paper examines how Canadian firms balance the benefits of technology adoption against the rising risk of cyber security breaches. We merge data from the 2021 Canadian Survey of Digital Technology and Internet Use and the 2021 Canadian Survey of Cyber Security and Cybercrime to investigate the trade-off firms face between adopting digital technologies to enhance productivity and efficiency and the potential increase in cyber security risk. The analysis explores the extent of digital technology adoption, differences across industries, the subsequent impacts on efficiency, and associated cyber security vulnerabilities. We build aggregate variables, such as the Business Digital Usage Score and a cyber security incidence variable, to quantify each firm's digital engagement and cyber security risk. A survey-weight-adjusted Lasso estimator is employed, and a debiasing method for high-dimensional logit models is introduced to identify the drivers of technological efficiency and cyber risk. The analysis reveals a digital divide linked to firm size, industry, and workforce composition. While rapid expansion of tools such as cloud services or artificial intelligence can raise efficiency, it simultaneously heightens exposure to cyber threats, particularly among larger enterprises.
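A hedged sketch of the estimation idea: an L1-penalised (Lasso-type) logistic regression fit with survey weights as observation weights on simulated firm data. The paper's debiasing step for high-dimensional logit models is not reproduced here, and the data-generating process below is invented.

```python
# Illustrative survey-weighted Lasso logit on simulated firm-level data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 500, 40
X = rng.normal(size=(n, p))                            # firm-level covariates
beta = np.zeros(p)
beta[:3] = [1.0, -0.8, 0.5]                            # only 3 true drivers
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta))))     # cyber-incident indicator
w = rng.uniform(0.5, 2.0, size=n)                      # survey weights

lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
lasso_logit.fit(X, y, sample_weight=w)                 # weights enter the loss
selected = np.flatnonzero(lasso_logit.coef_.ravel())
print("selected covariates:", selected)
```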
The expansion of renewable energy sources leads to volatility in electricity generation within energy systems. Subsurface storage of hydrogen in salt caverns can play an important role in long-term energy storage, but its global potential is not fully understood. This study investigates the global status quo and how much hydrogen salt caverns can contribute to stabilizing future renewable energy systems. A global geological suitability and land eligibility analysis for salt cavern placement is conducted and compared with the derived long-term storage needs of renewable energy systems. Results show that salt caverns suitable for hydrogen storage exist in North America, Europe, China, and Australia and can balance between 43% and 66% of the global electricity demand. By sharing the salt cavern potential with neighboring countries, up to 85% of the global electricity demand can be stabilized by salt caverns. Therefore, hydrogen stored in salt caverns can play a significant role in stabilizing renewable energy systems globally.
We examine future trajectories of the social cost of carbon, global temperatures, and carbon concentrations using the cost-benefit Dynamic Integrated Climate-Economy (DICE) model calibrated to the five Shared Socioeconomic Pathways (SSPs) under two mitigation scenarios: achieving net-zero carbon emissions by 2050 and by 2100. The DICE model is calibrated to align industrial and land-use carbon emissions with projections from six leading process-based integrated assessment models (IAMs): IMAGE, MESSAGE--GLOBIOM, AIM/CGE, GCAM, REMIND--MAgPIE and WITCH--GLOBIOM. We find that even with aggressive mitigation (net-zero by 2050), global temperatures are projected to exceed $2^\circ\text{C}$ above preindustrial levels by 2100, with estimates ranging from $2.5^\circ\text{C}$ to $2.7^\circ\text{C}$ across all SSPs and IAMs considered. Under the more lenient mitigation scenario (net-zero by 2100), global temperatures are projected to rise to between $3^\circ\text{C}$ and $3.7^\circ\text{C}$ by 2100. Additionally, the social cost of carbon is estimated to increase from approximately USD 30--50 in 2025 to USD 250--400 in 2100.
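For readers unfamiliar with cost-benefit IAMs, the social cost of carbon reported here is, in DICE-type models, the marginal welfare cost of an additional tonne of emissions expressed in consumption-equivalent terms; the notation below is generic and not the paper's calibration.

```latex
\[
  \mathrm{SCC}_t \;=\; -\,\frac{\partial W/\partial E_t}{\partial W/\partial C_t},
\]
% W: discounted social welfare; E_t: industrial emissions at time t;
% C_t: aggregate consumption at time t.
```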
Large language models (LLMs) increasingly serve as human-like decision-making agents in social science and applied settings. These LLM-agents are typically assigned human-like characters and placed in real-life contexts. However, how these characters and contexts shape an LLM's behavior remains underexplored. This study proposes and tests methods for probing, quantifying, and modifying an LLM's internal representations in a Dictator Game -- a classic behavioral experiment on fairness and prosocial behavior. We extract ``vectors of variable variations'' (e.g., ``male'' to ``female'') from the LLM's internal state. Manipulating these vectors during the model's inference can substantially alter how those variables relate to the model's decision-making. This approach offers a principled way to study and regulate how social concepts can be encoded and engineered within transformer-based models, with implications for alignment, debiasing, and designing AI agents for social simulations in both academic and commercial applications.
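A minimal sketch of the kind of activation-steering probe described here, using a small open model (gpt2), a single layer, and a difference-of-means direction between two persona prompts. The model, layer index, prompts, and scaling factor are placeholder assumptions for illustration, not the paper's setup.

```python
# Illustrative activation steering: extract a direction from hidden states and
# add it back during generation via a forward hook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

NAME, LAYER, ALPHA = "gpt2", 6, 4.0
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForCausalLM.from_pretrained(NAME, output_hidden_states=True)
model.eval()

def mean_hidden(prompt):
    """Mean hidden state after block LAYER for a prompt."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.hidden_states[LAYER + 1].mean(dim=1).squeeze(0)

# direction contrasting two persona framings (a "vector of variable variation")
v = mean_hidden("The dictator is a woman.") - mean_hidden("The dictator is a man.")

def steer(module, inputs, output):
    """Push the block's output activations along v on every forward pass."""
    if isinstance(output, tuple):
        return (output[0] + ALPHA * v,) + output[1:]
    return output + ALPHA * v

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("You have $100 to split with a stranger. You give $", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=8, do_sample=False)[0]))
handle.remove()
```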
Advances in generative AI have rapidly expanded the potential of computers to perform or assist in a wide array of tasks traditionally performed by humans. We analyze a large, real-world randomized experiment of over 6,000 workers at 56 firms to present some of the earliest evidence on how these technologies are changing the way knowledge workers do their jobs. We find substantial time savings on common core tasks across a wide range of industries and occupations: workers who make use of this technology spent half an hour less reading email each week and completed documents 12% faster. Despite the newness of the technology, nearly 40% of workers who were given access to the tool used it regularly in their work throughout the 6-month study.
We present evidence on how generative AI changes the work patterns of knowledge workers using data from a 6-month-long, cross-industry, randomized field experiment. Half of the 6,000 workers in the study received access to a generative AI tool integrated into the applications they already used for emails, document creation, and meetings. We find that access to the AI tool during the first year of its release primarily impacted behaviors that could be changed independently and not behaviors that required coordination to change: workers who used the tool spent 3 fewer hours, or 25% less time on email each week (intent to treat estimate is 1.4 hours) and seemed to complete documents moderately faster, but did not significantly change time spent in meetings.
Global consumption of heat is vast and difficult to decarbonise, but it could present an opportunity for commercial fusion energy technology. The economics of supplying heat with fusion energy are explored in the context of a future decarbonised energy system. A simple, generalised model is used to estimate the impact of selling heat on profitability and to compare it with selling electricity, for a variety of proposed fusion power plant permutations described in the literature. Heat production has the potential to significantly improve the financial performance of fusion relative to selling electricity. Upon entering a highly electrified energy system, fusion should aim to operate as a grid-scale heat pump, avoiding both electrical conversion and recirculation costs whilst exploiting firm demand for high-value heat. This strategy is relatively high-risk, high-reward, but options are identified for hedging these risks. We also identify and discuss new avenues for competition in this domain, which would not exist if fusion supplied electricity only.
We propose a field-theoretic framework that models money-debt dynamics in economic systems through a direct analogy to particle-hole creation in condensed matter physics. In this formulation, issuing credit generates a symmetric pair: money as a particle-like excitation and debt as its hole-like counterpart, embedded within a monetary vacuum field. The model is formalized via a second-quantized Hamiltonian that incorporates time-dependent perturbations to represent real-world effects such as interest and profit, which drive asymmetry and systemic imbalance. This framework successfully captures both macroeconomic phenomena, including quantitative easing (QE) and gold-backed monetary regimes, and microeconomic credit creation, under a unified quantum-like formalism. In particular, QE is interpreted as generating entangled-like pairs of currency and bonds, exhibiting systemic correlations akin to nonlocal quantum interactions. Asset-backed systems, on the other hand, are modeled as coherent superpositions that collapse upon use. This approach provides physicists with a rigorous and intuitive toolset to analyze economic behavior using many-body theory, laying the groundwork for a new class of models in econophysics and interdisciplinary field analysis.
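To make the analogy concrete, a generic pair-creation Hamiltonian of the type the abstract alludes to is sketched below; the operators, dispersions, and coupling are illustrative, and the paper's exact form may differ.

```latex
\[
  \hat H(t) \;=\; \sum_k \left( \varepsilon^{m}_k\, \hat a_k^{\dagger}\hat a_k
      + \varepsilon^{d}_k\, \hat b_k^{\dagger}\hat b_k \right)
   \;+\; \sum_k g_k(t) \left( \hat a_k^{\dagger}\hat b_k^{\dagger}
      + \hat b_k \hat a_k \right)
\]
% a_k^\dagger creates a unit of money (particle), b_k^\dagger the matching debt
% (hole); the time-dependent coupling g_k(t) stands in for interest and profit,
% which break the money-debt symmetry.
```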
This paper presents a realistic simulated stock market where large language models (LLMs) act as heterogeneous competing trading agents. The open-source framework incorporates a persistent order book with market and limit orders, partial fills, dividends, and equilibrium clearing alongside agents with varied strategies, information sets, and endowments. Agents submit standardized decisions using structured outputs and function calls while expressing their reasoning in natural language. Three findings emerge: First, LLMs demonstrate consistent strategy adherence and can function as value investors, momentum traders, or market makers per their instructions. Second, market dynamics exhibit features of real financial markets, including price discovery, bubbles, underreaction, and strategic liquidity provision. Third, the framework enables analysis of LLMs' responses to varying market conditions, similar to partial dependence plots in machine-learning interpretability. The framework allows simulating financial theories without closed-form solutions, creating experimental designs that would be costly with human participants, and establishing how prompts can generate correlated behaviors affecting market stability.
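A hedged sketch of the core market mechanics mentioned here, namely a persistent limit-order book with price-time priority and partial fills; the framework's actual data structures, dividend handling, and clearing rules are richer than this toy matching engine.

```python
# Minimal limit-order book with price-time priority and partial fills.
import heapq
import itertools

class OrderBook:
    def __init__(self):
        self._bids, self._asks = [], []        # heaps of resting limit orders
        self._seq = itertools.count()          # time-priority tie-breaker

    def submit(self, side, price, qty, trader):
        """Match an incoming limit order; return executed (price, qty, maker) fills."""
        fills = []
        book = self._asks if side == "buy" else self._bids
        crosses = (lambda p: price >= p) if side == "buy" else (lambda p: price <= p)
        while qty > 0 and book and crosses(abs(book[0][0])):
            key, _, rest_qty, maker = heapq.heappop(book)
            traded = min(qty, rest_qty)
            fills.append((abs(key), traded, maker))
            qty -= traded
            if rest_qty > traded:              # partial fill: return the remainder
                heapq.heappush(book, (key, next(self._seq), rest_qty - traded, maker))
        if qty > 0:                            # rest any unfilled remainder
            rest = self._bids if side == "buy" else self._asks
            key = -price if side == "buy" else price     # bids stored as a max-heap
            heapq.heappush(rest, (key, next(self._seq), qty, trader))
        return fills

book = OrderBook()
book.submit("sell", 101.0, 50, "market_maker")
book.submit("sell", 100.5, 30, "value_investor")
print(book.submit("buy", 101.0, 60, "momentum_trader"))
# -> fills 30 shares at 100.5, then 30 at 101.0; 20 shares remain on the ask at 101.0
```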
Using complete-count register data spanning three generations, we compare inter- and multigenerational transmission processes across municipalities in Sweden. We first document spatial patterns in intergenerational (parent-child) mobility, and study whether those patterns are robust to the choice of mobility statistic and the quality of the underlying microdata. We then ask whether there exists similar geographic variation in multigenerational mobility. Interpreting those patterns through the lens of a latent factor model, we identify which features of the transmission process vary across places.
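For concreteness, a latent factor model of the kind commonly used to interpret multigenerational persistence is sketched below; the paper's exact specification may differ.

```latex
% Observed status y of individual i in generation g is a noisy measure of a
% latent endowment e that is transmitted across generations:
\[
  y_{i,g} = \mu + \lambda\, e_{i,g} + u_{i,g}, \qquad
  e_{i,g} = \beta\, e_{i,g-1} + v_{i,g},
\]
% so measured parent-child associations (scaling with \lambda^2 \beta) understate
% the persistence of the latent endowment, which decays at rate \beta per generation.
```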
We compare the performance of human and artificially intelligent (AI) decision makers in simple binary classification tasks where the optimal decision rule is given by Bayes Rule. We reanalyze choices of human subjects gathered from laboratory experiments conducted by El-Gamal and Grether, and by Holt and Smith. We confirm that while, overall, Bayes Rule represents the single best model for predicting human choices, subjects are heterogeneous and a significant share of them make suboptimal choices that reflect judgement biases described by Kahneman and Tversky, including the ``representativeness heuristic'' (excessive weight on the evidence from the sample relative to the prior) and ``conservatism'' (excessive weight on the prior relative to the sample). We compare the performance of AI subjects gathered from recent versions of large language models (LLMs), including several versions of ChatGPT. These general-purpose generative AI chatbots are not specifically trained to do well in narrow decision making tasks, but are trained instead as ``language predictors'' using a large corpus of textual data from the web. We show that ChatGPT is also subject to biases that result in suboptimal decisions. However, we document a rapid evolution in the performance of ChatGPT from sub-human performance for early versions (ChatGPT 3.5) to superhuman and nearly perfect Bayesian classifications in the latest versions (ChatGPT 4o).
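A worked toy version of the decision problem and the two biases, with illustrative numbers rather than the experiments' actual cage compositions or priors:

```python
# Which of two cages produced the observed sample? Bayes Rule vs. two biases.
from math import comb

prior_A = 2 / 3                 # prior probability that cage A was used
p_A, p_B = 0.5, 2 / 3           # probability a drawn ball is "high" in each cage
n, k = 6, 4                     # observed sample: 4 "high" balls out of 6 draws

like_A = comb(n, k) * p_A**k * (1 - p_A)**(n - k)
like_B = comb(n, k) * p_B**k * (1 - p_B)**(n - k)

posterior_A = like_A * prior_A / (like_A * prior_A + like_B * (1 - prior_A))
print(f"Bayes posterior for cage A: {posterior_A:.3f}")   # optimal: pick A iff > 0.5

# the two documented biases, as extreme distortions of the same calculation
representativeness = like_A / (like_A + like_B)           # prior ignored entirely
conservatism = prior_A                                     # sample ignored entirely
print(f"sample-only (representativeness): {representativeness:.3f}")
print(f"prior-only (conservatism): {conservatism:.3f}")
```

With these numbers the Bayesian posterior favours cage A (about 0.59), whereas ignoring the prior flips the choice to cage B, illustrating how the two heuristics can produce opposite errors from the same evidence.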
Gallice and Monzon (2019) present a natural environment that sustains full cooperation in one-shot social dilemmas among a finite number of self-interested agents. They demonstrate that in a sequential public goods game, where agents lack knowledge of their position in the sequence but can observe some predecessors' actions, full contribution emerges in equilibrium due to agents' incentive to induce potential successors to follow suit. Furthermore, they show that this principle extends to a number of social dilemmas, the most prominent example being the prisoner's dilemma. In this study, we experimentally test the theoretical predictions of this model in a multi-player prisoner's dilemma environment, where subjects are not aware of their position in the sequence and receive only partial information on past cooperating actions. Through rigorous structural econometric analysis, we assess the descriptive capacity of the model against alternative behavioural strategies, such as conditional cooperation, altruistic play and free-riding behaviour. We find that the majority resort to free-riding behaviour, around 30% are classified as Gallice and Monzon (2019) types, followed by those with social preference considerations and unconditional altruists.
This paper exploits variation resulting from a series of federal and state Medicaid expansions between 1977 and 2017 to estimate the effects of children's increased access to public health insurance on the labor market outcomes of their mothers. The results imply that the extended Medicaid eligibility of children leads to positive labor supply responses of single mothers and to negative labor supply responses of married mothers. The analysis of mechanisms suggests that extended children's Medicaid eligibility positively affects take-up of Medicaid and health of children.
This study investigates the causal relationship between the COVID-19 pandemic and wage levels, aiming to provide a quantified assessment of the impact. While no significant evidence is found for long-term effects, the analysis reveals a statistically significant positive influence on wages in the short term, particularly within a one-year horizon. Contrary to common expectations, the results suggest that COVID-19 may have led to short-run wage increases. Several potential mechanisms are proposed to explain this counterintuitive outcome. The findings remain robust when controlling for other macroeconomic indicators such as GDP, considered here as a proxy for aggregate demand. The paper also addresses issues of external validity in the concluding section.
Children's travel behavior plays a critical role in shaping long-term mobility habits and public health outcomes. Despite growing global interest, little is known about the factors influencing travel mode choice of children for school journeys in Switzerland. This study addresses this gap by applying a random forest classifier, a machine learning algorithm, to data from the Swiss Mobility and Transport Microcensus, in order to identify key predictors of children's travel mode choice for school journeys. Distance consistently emerges as the most important predictor across all models, for instance when distinguishing between active vs. non-active travel or car vs. non-car usage. The models show relatively high performance, with overall classification accuracy of 87.27% (active vs. non-active) and 78.97% (car vs. non-car), respectively. The study offers empirically grounded insights that can support school mobility policies and demonstrates the potential of machine learning in uncovering behavioral patterns in complex transport datasets.
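A compact sketch of this kind of modelling pipeline on simulated data: fit a random forest for active vs. non-active travel and read off feature importances. The variables and the data-generating rule are invented (distance dominates by construction) and do not come from the Swiss Mobility and Transport Microcensus.

```python
# Illustrative random forest on simulated school-trip data with feature importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "distance_km": rng.exponential(1.5, n),
    "age": rng.integers(6, 16, n),
    "household_cars": rng.integers(0, 3, n),
    "urban": rng.integers(0, 2, n),
})
# toy rule: longer trips and more household cars make active travel less likely
p_active = 1 / (1 + np.exp(1.2 * df.distance_km + 0.5 * df.household_cars - 2.5))
df["active"] = rng.binomial(1, p_active)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="active"), df["active"], test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)
print("accuracy:", round(rf.score(X_test, y_test), 3))
print(dict(zip(X_train.columns, rf.feature_importances_.round(3))))
```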