This paper presents an Integer Linear Programming (ILP) approach to optimizing pedestrian evacuation in flood-prone historic urban areas. The model minimizes total evacuation cost by integrating pedestrian speed, route length, and effort, while also selecting the optimal number and placement of shelters. A modified minimum-cost-flow formulation captures complex hydrodynamic and behavioral conditions within a directed street network. The evacuation problem is modeled on an extended graph of the urban street network, in which nodes and links represent paths and shelters, including "deadly" nodes that account for incomplete evacuations, enabling an accurate representation of real-world constraints and network dynamics.
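To make the flavor of such a formulation concrete, here is a minimal sketch of a minimum-cost-flow ILP with binary shelter-selection variables, written with the open-source PuLP solver. The toy graph, costs, capacities, and shelter budget are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal min-cost-flow ILP with shelter selection (illustrative data only).
import pulp

arcs = {("A", "B"): 4, ("A", "C"): 6,          # arc -> unit evacuation cost
        ("B", "S1"): 3, ("C", "S2"): 2, ("B", "C"): 1}
demand = {"A": 20, "B": 10, "C": 5}            # pedestrians leaving each node
shelters = ("S1", "S2")                        # candidate shelter sites
cap, max_open = 40, 1                          # shelter capacity and budget

prob = pulp.LpProblem("evacuation", pulp.LpMinimize)
f = {a: pulp.LpVariable(f"f_{a[0]}_{a[1]}", lowBound=0) for a in arcs}
y = {s: pulp.LpVariable(f"y_{s}", cat="Binary") for s in shelters}

prob += pulp.lpSum(arcs[a] * f[a] for a in arcs)   # total evacuation cost

for n, d in demand.items():                    # flow conservation at origins
    out = pulp.lpSum(f[a] for a in arcs if a[0] == n)
    inflow = pulp.lpSum(f[a] for a in arcs if a[1] == n)
    prob += out - inflow == d

for s in shelters:                             # intake only at open shelters
    prob += pulp.lpSum(f[a] for a in arcs if a[1] == s) <= cap * y[s]
prob += pulp.lpSum(y.values()) <= max_open     # limit number of open shelters

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({a: f[a].value() for a in arcs}, {s: y[s].value() for s in y})
```

In the full model, deadly nodes would enter as additional sink nodes carrying a high penalty cost; they are omitted here for brevity.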
The networked nature of supply chains makes them susceptible to systemic risk, where local firm failures can propagate through firm interdependencies and lead to cascading supply chain disruptions. The systemic risk of supply chains can be quantified and is closely related to the topology and dynamics of supply chain networks (SCNs). How different network properties contribute to this risk remains unclear. Here, we ask whether systemic risk can be significantly reduced by strategically rewiring supplier-customer links. In doing so, we clarify the role of specific endogenously emerged network structures and the extent to which the observed systemic risk results from fundamental properties of the dynamical system. We minimize systemic risk through rewiring by employing a method from statistical physics that respects firm-level constraints on production. Analyzing six specific subnetworks of the national SCNs of Ecuador and Hungary, we demonstrate that systemic risk can be reduced considerably, by 16-50%, without reducing the production output of firms. A comparison of network properties before and after rewiring reveals that this risk reduction is achieved by changing the connectivity in non-trivial ways. These results suggest that actual SCN topologies carry unnecessarily high levels of systemic risk. We discuss the possibility of devising policies to reduce systemic risk through minimal, targeted interventions in supply chain networks via market-based incentives.
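The statistical-physics rewiring alluded to here can be caricatured as zero-temperature Metropolis sampling over degree-preserving edge swaps. In the sketch below, the risk functional (mean downstream cascade size under a worst-case single-sourcing assumption) and the random test network are stand-ins, not the paper's systemic-risk index or data.

```python
# Degree-preserving rewiring that greedily reduces a toy systemic-risk proxy.
import random
import networkx as nx

def risk(g):
    # Mean cascade size if each firm fails alone and failure propagates to
    # every downstream customer (worst-case single-sourcing assumption).
    return sum(len(nx.descendants(g, n)) for n in g) / g.number_of_nodes()

def rewire(g, steps=2000, seed=0):
    rng = random.Random(seed)
    g = g.copy()
    best = risk(g)
    for _ in range(steps):
        (a, b), (c, d) = rng.sample(list(g.edges()), 2)
        # Swapping (a->b, c->d) for (a->d, c->b) keeps every firm's
        # number of suppliers and customers unchanged.
        if a == d or c == b or g.has_edge(a, d) or g.has_edge(c, b):
            continue
        g.remove_edges_from([(a, b), (c, d)])
        g.add_edges_from([(a, d), (c, b)])
        r = risk(g)
        if r <= best:
            best = r                      # accept risk-reducing swaps
        else:                             # revert (zero-temperature Metropolis)
            g.remove_edges_from([(a, d), (c, b)])
            g.add_edges_from([(a, b), (c, d)])
    return g, best

g0 = nx.gnp_random_graph(60, 0.06, directed=True, seed=1)
g1, r1 = rewire(g0)
print(f"risk before: {risk(g0):.2f}, after: {r1:.2f}")
```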
While most existing epidemic models focus on the influence of isolated factors, infectious disease transmission is inherently shaped by the complex interplay of multiple interacting elements. To better capture real-world dynamics, it is essential to develop epidemic models that incorporate diverse, realistic factors. In this study, we propose a coupled disease-information spreading model on multiplex networks that simultaneously accounts for three critical dimensions: media influence, higher-order interactions, and population mobility. This integrated framework enables a systematic analysis of synergistic spreading mechanisms under practical constraints and facilitates the exploration of effective epidemic containment strategies. We employ a microscopic Markov chain approach (MMCA) to derive the coupled dynamical equations and identify epidemic thresholds, which are then validated through extensive Monte Carlo (MC) simulations. Our results show that both mass media dissemination and higher-order network structures contribute to suppressing disease transmission by enhancing public awareness. However, the containment effect of higher-order interactions weakens as the order of simplices increases. We also explore the influence of subpopulation characteristics, revealing that increasing inter-subpopulation connectivity in a connected metapopulation network leads to lower disease prevalence. Furthermore, guiding individuals to migrate toward less accessible or more isolated subpopulations is shown to effectively mitigate epidemic spread. These findings offer valuable insights for designing targeted and adaptive intervention strategies in complex epidemic settings.
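As a rough illustration of the MMCA machinery mentioned above, the sketch below iterates tree-like closure equations for a coupled awareness-disease (UAU-SIS-style) process on a two-layer multiplex with a mass-media broadcast term. The higher-order simplices and metapopulation mobility that the paper also models are omitted, and all layers and rates are illustrative assumptions.

```python
# Schematic MMCA iteration for coupled awareness-disease spreading.
import numpy as np

rng = np.random.default_rng(0)
N = 200
A = (rng.random((N, N)) < 0.03).astype(float)   # awareness (virtual) layer
B = (rng.random((N, N)) < 0.02).astype(float)   # physical-contact layer
A, B = np.triu(A, 1), np.triu(B, 1); A += A.T; B += B.T

lam, delta = 0.15, 0.6      # awareness transmission / forgetting rates
beta, mu = 0.2, 0.4         # infection / recovery rates
gamma, m = 0.3, 0.05        # awareness attenuation of beta, media rate

pA = np.full(N, 0.1)        # P(node is aware)
pI = np.full(N, 0.05)       # P(node is infected)
for _ in range(200):
    # Tree-like probabilities of NOT receiving awareness / infection.
    r = np.prod(1 - lam * A * pA[None, :], axis=1)
    qU = np.prod(1 - beta * B * pI[None, :], axis=1)          # if unaware
    qA = np.prod(1 - gamma * beta * B * pI[None, :], axis=1)  # if aware
    pA_new = (1 - pA) * (1 - r * (1 - m)) + pA * (1 - delta)
    pI_new = (1 - pI) * (pA * (1 - qA) + (1 - pA) * (1 - qU)) \
             + pI * (1 - mu)
    pA, pI = pA_new, pI_new
print("stationary prevalence:", pI.mean())
```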
This systematic literature review seeks to explain the mechanisms and implications of information disorder for public policy and the democratic process, by proposing a five-stage framework capturing its full life cycle. To our knowledge, no prior reviews in the field of public administration have offered a comprehensive, integrated model of information disorder; most existing studies are situated within communication, information science, or data science, and tend to focus on isolated aspects of the phenomenon. By connecting concepts and stages with enabling factors, agents, tactics and impacts, we reframe information disorder not as a question of "truthiness", individual cognition, digital literacy, or merely of technology, but as a socio-material phenomenon, deeply embedded in and shaped by the material conditions of contemporary digital society. This approach calls for a shift away from fragmented interventions toward more holistic, system-level policy responses.
We derive the master equations for the Susceptible-Infected (SI) model on general hypernetworks with $N$-body interactions. We solve these equations exactly for infinite $d$-regular hypernetworks, and obtain an explicit solution for the expected infection level as a function of time. The solution shows that the epidemic spreads to the entire population as $t \to \infty$ if and only if the initial infection level exceeds a positive threshold value. This phase transition is an effect of the higher-order interactions, and is absent with pairwise interactions.
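For orientation, the generic bookkeeping behind such master equations can be written as follows, assuming the common simplicial-contagion rule in which node $i$ is infected through hyperedge $e$ at rate $\beta$ once all other members of $e$ are infected; the notation is ours, and the paper's exact equations may differ.

```latex
% [S_i], [I_i]: probabilities that node i is susceptible / infected;
% the bracketed product is the joint event that i is susceptible while
% all other members of hyperedge e are infected.
\begin{align}
  \frac{d[S_i]}{dt} &= -\beta \sum_{e \ni i}
      \Big[\, S_i \prod_{j \in e \setminus \{i\}} I_j \,\Big], &
  \frac{d[I_i]}{dt} &= +\beta \sum_{e \ni i}
      \Big[\, S_i \prod_{j \in e \setminus \{i\}} I_j \,\Big].
\end{align}
```

Closing this hierarchy requires the joint probabilities on the right-hand side; the exact solvability on infinite $d$-regular hypernetworks is what makes an explicit infection-level formula, and hence the threshold, attainable.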
AI models are rapidly becoming embedded in all aspects of nuclear energy research and work, but the safety, security, and safeguards consequences of this embedding are not well understood. In this paper, we call for the creation of an anticipatory system of governance for AI in the nuclear sector, as well as the creation of a global AI observatory as a means of operationalizing anticipatory governance. The paper explores the contours of the nuclear AI observatory and an anticipatory system of governance by drawing on work in science and technology studies, public policy, and foresight studies.
The main form of freeway traffic congestion is the familiar stop-and-go wave, characterized by wide moving jams that propagate upstream indefinitely provided there is sufficient traffic demand. These waves cause severe, long-lasting adverse effects, such as reduced traffic efficiency, increased driving risks, and higher vehicle emissions, underscoring the crucial importance of artificial intervention in their propagation. Over the past two decades, two prominent suppression strategies have emerged: variable speed limit (VSL) control and jam-absorption driving (JAD). Although they share similar research motivations, objectives, and theoretical foundations, the two strategies have developed largely in isolation. To synthesize fragmented advances and drive the field forward, this paper first provides a comprehensive review of the achievements of VSL and JAD in stop-and-go wave suppression. It then focuses on bridging the two areas and identifying research opportunities from the following perspectives: fundamental diagrams, traffic dynamics modeling, traffic state estimation and prediction, stochasticity, scenarios for strategy validation, and field tests and practical deployment. We expect that, through this review, each area can address its limitations by identifying and leveraging the strengths of the other, promoting the overall research goal of freeway stop-and-go wave suppression.
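The core geometry of JAD fits in a few lines: slow an "absorbing" vehicle far upstream so that it reaches the jam's tail only after the jam has dissolved, rather than feeding it. All numbers below (wave speed, dissolution time, positions) are illustrative assumptions, not a validated controller.

```python
# Back-of-envelope jam-absorption driving: choose the absorbing vehicle's
# reduced speed so it arrives at the jam tail only after the jam clears.
def absorbing_speed(x0, x_jam, w, t_dissolve, v_free):
    """x0: vehicle position [m]; x_jam: jam tail position [m];
    w: tail propagation speed [m/s] (negative = upstream);
    t_dissolve: time until the jam clears [s]; v_free: free-flow speed."""
    x_clear = x_jam + w * t_dissolve       # tail position when jam clears
    v_needed = (x_clear - x0) / t_dissolve
    return min(max(v_needed, 0.0), v_free)

# Vehicle 1.2 km upstream; tail moves upstream at 15 km/h; jam lasts 60 s.
v = absorbing_speed(x0=0.0, x_jam=1200.0, w=-15 / 3.6, t_dissolve=60.0,
                    v_free=120 / 3.6)
print(f"drive at {v * 3.6:.0f} km/h instead of 120 km/h")
```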
Open Large Language Models (OLLMs) are increasingly leveraged in generative AI applications, posing new challenges for detecting their outputs. We propose OpenTuringBench, a new benchmark based on OLLMs, designed to train and evaluate machine-generated text detectors on the Turing Test and Authorship Attribution problems. OpenTuringBench focuses on a representative set of OLLMs, and features a number of challenging evaluation tasks, including human/machine-manipulated texts, out-of-domain texts, and texts from previously unseen models. We also provide OTBDetector, a contrastive learning framework to detect and attribute OLLM-based machine-generated texts. Results highlight the relevance and varying degrees of difficulty of the OpenTuringBench tasks, with our detector achieving remarkable capabilities across the various tasks and outperforming most existing detectors. Resources are available on the OpenTuringBench Hugging Face repository at https://huggingface.co/datasets/MLNTeam-Unical/OpenTuringBench
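Since the benchmark is distributed through Hugging Face, a minimal loading sketch with the `datasets` library looks as follows; the splits and fields printed are whatever the dataset defines, and if it exposes multiple configurations the config name must be passed as a second argument.

```python
# Load OpenTuringBench (repository id taken from the paper's URL).
from datasets import load_dataset

ds = load_dataset("MLNTeam-Unical/OpenTuringBench")
print(ds)                                  # available splits and features
first_split = list(ds.keys())[0]
sample = next(iter(ds[first_split]))
print({k: str(v)[:80] for k, v in sample.items()})   # peek at one record
```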
Complex networks are frequently employed to model physical or virtual complex systems. When certain entities exist across multiple systems simultaneously, unveiling their corresponding relationships across the networks becomes crucial. This problem, known as network alignment, holds significant importance. It enhances our understanding of the structure and behavior of complex systems, facilitates the validation and extension of theoretical physics research on complex systems, and fosters diverse practical applications across various fields. However, because the structure, characteristics, and properties of complex networks vary across fields, the study of network alignment is often isolated within each domain, with even the terminologies and concepts lacking uniformity. This review comprehensively summarizes the latest advancements in network alignment research, focusing on alignment characteristics and progress in domains such as social network analysis, bioinformatics, computational linguistics, and privacy protection. It provides a detailed analysis of the implementation principles, processes, and performance differences of various methods, including structure-consistency-based methods, network-embedding-based methods, and graph neural network-based (GNN-based) methods. Additionally, methods for network alignment under different conditions, such as attributed networks, heterogeneous networks, directed networks, and dynamic networks, are presented. Finally, challenges and open issues for future studies are discussed.
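As a toy instance of the embedding-based family, the sketch below embeds two networks spectrally, learns an orthogonal map from a few seed (anchor) pairs via Procrustes analysis, and matches the remaining nodes by embedding similarity; real systems replace the spectral embedding with node2vec- or GNN-style encoders.

```python
# Embedding-based alignment toy: spectral embeddings + seeded Procrustes.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def embed(adj, dim=5):
    _, vecs = np.linalg.eigh(adj)
    z = vecs[:, -dim:]                       # top spectral coordinates
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def align(adj1, adj2, seeds, dim=5):
    z1, z2 = embed(adj1, dim), embed(adj2, dim)
    i1, i2 = map(list, zip(*seeds))          # known corresponding nodes
    q, _ = orthogonal_procrustes(z1[i1], z2[i2])
    return (z1 @ q @ z2.T).argmax(axis=1)    # best match in network 2

rng = np.random.default_rng(0)
a = np.triu((rng.random((50, 50)) < 0.1).astype(float), 1); a += a.T
perm = rng.permutation(50)
b = a[np.ix_(perm, perm)]                    # network 2: relabelled copy
truth = np.argsort(perm)                     # ground-truth correspondence
match = align(a, b, seeds=[(u, truth[u]) for u in range(8)])
print("fraction matched correctly:", (match == truth).mean())
```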
With a folk understanding that political polarization refers to socio-political divisions within a society, many have proclaimed that we are more divided than ever. In this account, polarization has been blamed for populism, the erosion of social cohesion, the loss of trust in the institutions of democracy, legislative dysfunction, and the collective failure to address existential risks such as Covid-19 or climate change. However, at a global scale there is surprisingly little academic literature that conclusively supports these claims, with half of all studies being U.S.-focused. Here, we provide an overview of the global state of research on polarization, highlighting insights that are robust across countries, those unique to specific contexts, and key gaps in the literature. We argue that addressing these gaps is urgent, but has been hindered thus far by systemic and cultural barriers, such as regionally stratified restrictions on data access and misaligned research incentives. If cross-disciplinary inertia leaves these disparities unaddressed, we see a substantial risk that countries will adopt policies to tackle polarization based on inappropriate evidence, risking flawed decision-making and the weakening of democratic institutions.
Global consumption of heat is vast and difficult to decarbonise, but it could present an opportunity for commercial fusion energy technology. The economics of supplying heat with fusion energy are explored in the context of a future decarbonised energy system. A simple, generalised model is used to estimate the impact of selling heat on profitability and to compare it with selling electricity, for a variety of proposed fusion power plant permutations described in the literature. Heat production has the potential to significantly improve the financial performance of fusion relative to selling electricity. Upon entering a highly electrified energy system, fusion should aim to operate as a grid-scale heat pump, avoiding both electrical conversion and recirculation costs whilst exploiting firm demand for high-value heat. This strategy is relatively high-risk, high-reward, but options are identified for hedging these risks. We also identify and discuss new avenues for competition in this domain, which would not exist if fusion supplied electricity only.
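The heat-versus-electricity trade-off reduces to a back-of-envelope comparison: selling heat avoids turbine losses but means buying recirculating power from the grid. All figures below are illustrative assumptions, not the paper's calibrated values.

```python
# Toy revenue comparison: sell heat vs. sell electricity (hourly basis).
P_th = 1000.0        # fusion thermal output [MW_th] (assumed)
eta_e = 0.40         # thermal-to-electric efficiency (assumed)
R = 80.0             # recirculating electric power needed [MW_e] (assumed)
price_e = 80.0       # electricity price [$/MWh_e] (assumed)
price_h = 45.0       # heat price [$/MWh_th] (assumed)

rev_electricity = (P_th * eta_e - R) * price_e   # sell net electricity
rev_heat = P_th * price_h - R * price_e          # sell heat, buy recirc power
print(f"electricity: ${rev_electricity:,.0f}/h, heat: ${rev_heat:,.0f}/h")
```

Under these placeholder numbers heat sales out-earn electricity sales, which is the direction of the effect the abstract describes; the conclusion is sensitive to the assumed heat and electricity prices.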
Shared micromobility (SMM) is often cited as a solution to the first/last-mile problem of public transport (train) travel, yet when implemented it often fails to be adopted by the broader travelling public. A large part of behavioural adoption relates to people's attitudes and perceptions. In this paper, we develop an adjusted behavioural framework based on the UTAUT2 technology acceptance framework. We carry out an exploratory factor analysis (EFA) to obtain attitudinal factors, which we then use to perform a latent class cluster analysis (LCCA), with the goal of studying the potential adoption of SMM and assessing the drivers and barriers perceived by different user groups. Our findings suggest there are six distinct user groups with varying intention to use shared micromobility: Progressives, Conservatives, Hesitant participants, Bold innovators, Anxious observers and Skilled sceptics. Bold innovators and Progressives tend to be the most open to adopting SMM and are also able to do so. Hesitant participants would like to use it but find it difficult or dangerous, while Skilled sceptics are capable and confident but have limited intention of using it. Conservatives and Anxious observers are the most negative about SMM, finding it difficult to use and dangerous. In general, factors relating to technological savviness, ease of use, physical safety and societal perception appear to be the biggest barriers to wider adoption. Younger, highly educated males are the group most open to using shared micromobility, while older individuals with lower incomes and a lower level of education tend to be the least likely to use it.
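A typical EFA-then-cluster pipeline looks like the sketch below. Here a Gaussian mixture stands in for the LCCA stage (latent class models are usually fitted with dedicated tools), and the Likert-style responses are random placeholders rather than survey data.

```python
# EFA on attitude items, then clustering respondents on factor scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(500, 20)).astype(float)  # 20 Likert items

efa = FactorAnalysis(n_components=4, rotation="varimax")
scores = efa.fit_transform(X)              # respondents x attitudinal factors
print("loadings shape:", efa.components_.T.shape)

gmm = GaussianMixture(n_components=6, random_state=0).fit(scores)
classes = gmm.predict(scores)              # six attitude-based groups
print("group sizes:", np.bincount(classes))
```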
This paper presents the design, implementation, and evaluation of a didactic proposal on Rutherford's gold foil experiment, tailored for high schools. Grounded in constructivist pedagogy, the activity introduces key concepts of modern physics, often absent from standard curricula, through a hands-on, inquiry-based approach. By employing analogical reasoning and black-box modeling, students engage in experimental investigation and collaborative problem-solving to explore atomic structure. The activity was implemented as a case study with a class of first-year students (aged 14-15) from an applied-science-focused secondary school in Italy. Data collection combined qualitative observations, structured discussions, and digital feedback tools to assess conceptual learning and student engagement. Findings indicate that well-designed, student-centered interventions can meaningfully support the development of abstract scientific understanding, while fostering critical thinking and collaborative skills.
In this article we show that the distributions of ξ follow an exponential law for real networks, while the ξ distributions for random networks are bell-shaped and closer to the normal distribution. The ξ distributions for Barabási-Albert and Watts-Strogatz networks are similar to those for random networks (bell-shaped) for most parameter values, but when these parameters become small enough, the Barabási-Albert and Watts-Strogatz networks become more realistic with respect to their ξ distributions.
Synchrophasor technology is an emerging technology for the monitoring and control of wide-area measurement systems (WAMS). In an elementary WAMS, two identical phasors measured at two different locations differ in phase angle, since their reference waveforms are not synchronized with each other. Phasor measurement units (PMUs) measure input phasors with respect to a common reference wave based on atomic clock pulses received from global positioning system (GPS) satellites, eliminating the variation in measured phase angles due to the distant locations of the measurement nodes. This has found tremendous application in quick fault detection, fault-location analysis, and accurate current, voltage, frequency, and phase angle measurement in WAMS. Commercially available PMU models often prove too expensive for research and development as well as for grid-integration projects. This article proposes an economical PMU model, optimized for accurate steady-state performance, based on the recursive discrete Fourier transform (DFT), and provides results and detailed analysis of the proposed model per the steady-state compliance specifications of IEEE Standard C37.118.1. Results accurate to 13 digits after the decimal point are obtained with the developed PMU model for both nominal and off-nominal frequency inputs in steady state.
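The recursive DFT at the heart of such a PMU model admits a compact sketch: seed one full-window DFT, then update the phasor each sample using only the newest and oldest samples. The sampling setup and test signal below are illustrative, and this is not the authors' full C37.118-compliant implementation.

```python
# Recursive-DFT phasor estimator (stationary-reference form), nominal input.
import numpy as np

f0, N = 50.0, 64                  # nominal frequency, samples per cycle
fs = f0 * N
t = np.arange(8 * N) / fs         # eight cycles of input signal
Vrms, phi = 230.0, np.deg2rad(30.0)
x = np.sqrt(2) * Vrms * np.cos(2 * np.pi * f0 * t + phi)

X = np.zeros(len(x), dtype=complex)
# Seed with one full-window DFT, then update recursively per sample:
# X[n] = X[n-1] + (sqrt(2)/N) * (x[n] - x[n-N]) * exp(-j*2*pi*n/N).
n0 = np.arange(N)
X[N - 1] = np.sqrt(2) / N * np.sum(x[:N] * np.exp(-2j * np.pi * n0 / N))
for n in range(N, len(x)):
    X[n] = X[n - 1] + np.sqrt(2) / N * (x[n] - x[n - N]) \
           * np.exp(-2j * np.pi * n / N)

print(f"|X| = {abs(X[-1]):.6f} V (expect {Vrms})")
print(f"angle = {np.degrees(np.angle(X[-1])):.6f} deg (expect 30)")
```

For a nominal-frequency input the estimate is stationary, returning the rms magnitude and the phase angle relative to the common reference.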
Vaccine hesitancy and misinformation are significant barriers to achieving widespread vaccination coverage. Smaller public health departments may lack the expertise or resources to craft effective vaccine messaging. This paper explores the potential of ChatGPT-augmented messaging to promote confidence in vaccination uptake. We conducted a survey in which participants chose between pairs of vaccination messages and assessed which was more persuasive and to what extent. In each pair, one message was the original, and the other was augmented by ChatGPT. At the end of the survey, participants were informed that half of the messages had been generated by ChatGPT. They were then asked to provide both quantitative and qualitative responses regarding how knowledge of a message's ChatGPT origin affected their impressions. Overall, ChatGPT-augmented messages were rated slightly higher than the original messages. These messages generally scored better when they were longer. Respondents did not express major concerns about ChatGPT-generated content, nor was there a significant relationship between participants' views on ChatGPT and their message ratings. Notably, there was a correlation between whether a message appeared first or second in a pair and its score. These results point to the potential of ChatGPT to enhance vaccine messaging, suggesting a promising direction for future research on human-AI collaboration in public health communication.
Critical points separate distinct dynamical regimes of complex systems, often delimiting functional or macroscopic phases in which the system operates. However, the long-term prediction of critical regimes and behaviors is challenging given the narrow set of parameters from which they emerge. Here, we propose a framework to learn the rules that govern the dynamic processes of a system. The learned governing rules then refine and guide the representation learning of neural networks from a series of dynamic graphs. This combination enables knowledge-based prediction of the critical behaviors of dynamical networked systems. We evaluate the performance of our framework in predicting two typical critical behaviors in spreading dynamics on various synthetic and real-world networks. Our results show that governing rules can be learned effectively and significantly improve prediction accuracy. Our framework demonstrates how learning the underlying mechanism can improve the representational power of deep neural networks, pointing toward applications that predict complex behavior driven by learnable physical rules.
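The rule-learning step can be illustrated in isolation: from snapshots of a spreading process on a graph, regress each node's activation on its number of active neighbours to recover an SI-like governing rule. The simulation, rule form, and logistic fit below are our own illustrative stand-ins; the paper's neural-network stage is omitted.

```python
# Recover a spreading rule P(activate | k active neighbours) from snapshots.
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
g = nx.erdos_renyi_graph(300, 0.03, seed=1)
A = nx.to_numpy_array(g)
beta = 0.08                                 # hidden per-contact infectivity

state = (rng.random(300) < 0.05).astype(int)
rows, labels = [], []
for _ in range(20):
    k_inf = A @ state                       # infected neighbours per node
    p = 1 - (1 - beta) ** k_inf             # true (hidden) governing rule
    new = (rng.random(300) < p) & (state == 0)
    sus = state == 0
    rows += list(k_inf[sus]); labels += list(new[sus].astype(int))
    state = state | new.astype(int)

model = LogisticRegression().fit(np.array(rows)[:, None], labels)
k = np.arange(6)[:, None]
print("learned P(activate | k):", model.predict_proba(k)[:, 1].round(3))
print("true    P(activate | k):", (1 - (1 - beta) ** k.ravel()).round(3))
```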
Recombinant growth theory highlights the pivotal role of cumulative knowledge in driving innovation. Although interconnected knowledge facilitates smoother dissemination, its connection to scientific disruption remains poorly understood. Here, we quantify knowledge dependence based on the degree to which references within a given paper's bibliography cite one another. Analyzing 53.8 million papers spanning six decades, we observe that the share of papers built on independent knowledge has decreased over time. However, propensity score matching and regression analyses reveal that such papers are associated with greater scientific disruption, as those who cite them are less likely to cite their references. Moreover, a team's preference for independent knowledge amplifies its disruptive potential, regardless of team size, geographic distance, or collaboration freshness. Despite their disruptive nature, papers built on independent knowledge receive fewer citations and delayed recognition. Taken together, these findings fill a critical gap in our fundamental understanding of scientific innovation, revealing a universal law in peer recognition: knowledge independence breeds disruption at the cost of impact.
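The dependence measure, as we read the abstract, is the density of citation links among a paper's own references; a minimal sketch on a toy citation graph:

```python
# Fraction of reference pairs in which one reference cites the other.
from itertools import combinations

def knowledge_dependence(refs, cites):
    """refs: list of reference ids; cites: dict id -> set of cited ids."""
    pairs = list(combinations(refs, 2))
    if not pairs:
        return 0.0
    linked = sum(1 for a, b in pairs
                 if b in cites.get(a, set()) or a in cites.get(b, set()))
    return linked / len(pairs)

cites = {"r1": {"r2"}, "r3": {"r1"}}         # toy citation graph
print(knowledge_dependence(["r1", "r2", "r3", "r4"], cites))
# 2 linked pairs out of 6 -> 0.33; low values indicate independent knowledge
```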