In this paper, we study a stochastic susceptible-infected-susceptible (SIS) epidemic model that includes an additional immigration process. In the presence of multiplicative noise, generated by environmental perturbations, the model exhibits noise-induced transitions. The bifurcation diagram has two distinct regions of unimodality and bimodality in which the steady-state probability distribution has one and two peaks, respectively. Apart from first-order transitions between the two regimes, a critical-point transition occurs at a cusp point with the transition belonging to the mean-field Ising universality class. The epidemic model shares these features with the well-known Horsthemke-Lefever model of population genetics. The effect of vaccination on the spread/containment of the epidemic in a stochastic setting is also studied. We further propose a general vaccine-hesitancy model, along the lines of Kirman's ant model, with the steady-state distribution of the fraction of the vaccine-willing population given by the Beta distribution. The distribution is shown to give a good fit to the COVID-19 data on vaccine hesitancy and vaccination. We derive the steady-state probability distribution of the basic reproduction number, a key parameter in epidemiology, based on a beta-distributed fraction of the vaccinated population. Our study highlights the universal features that epidemic and vaccine models share with other dynamical models.
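The final claim, that the basic reproduction number inherits a distribution from a Beta-distributed vaccinated fraction, can be illustrated numerically. A minimal sketch, assuming (hypothetically) that the effective reproduction number is R_eff = R0(1 - v) with v ~ Beta(a, b); the shape parameters and R0 below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 8.0, 2.0          # Beta shape parameters (assumed for illustration)
R0 = 3.0                 # basic reproduction number without vaccination

v = rng.beta(a, b, size=100_000)   # sampled vaccinated fraction
R_eff = R0 * (1.0 - v)             # effective reproduction number per sample

# Since 1 - v has mean b / (a + b), E[R_eff] = R0 * b / (a + b) = 0.6 here
print(R_eff.mean())
```

Under this assumption R_eff follows a scaled, reflected Beta distribution on [0, R0], so its density can be written down in closed form rather than sampled.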
The universal genetic code presents a fundamental paradox in molecular biology. Recent advances in synthetic biology have demonstrated that the code is remarkably flexible--organisms can survive with 61 codons instead of 64, natural variants have reassigned codons 38+ times, and fitness costs of recoding stem primarily from secondary mutations rather than code changes themselves. Yet despite billions of years of evolution and this proven flexibility, approximately 99% of life maintains an identical 64-codon genetic code. This extreme conservation cannot be fully explained by current evolutionary theory, which predicts far more variation given the demonstrated viability of alternatives. I propose that this paradox--evolutionary flexibility coupled with mysterious conservation--reveals unrecognized constraints on biological information systems. This paper presents testable predictions to distinguish between competing explanations: extreme network effects, hidden optimization parameters, or potentially, computational architecture constraints that transcend standard evolutionary pressures.
To fully exploit the potential of computational phylogenetic methods for cognate data one needs to leverage specific (complex) models and machine learning-based techniques. However, both approaches require datasets that are substantially larger than the manually collected cognate data currently available. To the best of our knowledge, there exists no feasible approach to automatically generate larger cognate datasets. We substantiate this claim by automatically extracting datasets from BabelNet, a large multilingual encyclopedic dictionary. We demonstrate that phylogenetic inferences on the respective character matrices yield trees that are largely inconsistent with the established gold standard ground truth trees. We also discuss why we consider it unlikely that more suitable character matrices can be extracted from other multilingual resources. Phylogenetic data analysis approaches that require larger datasets can therefore not be applied to cognate data. Thus, it remains an open question how, and if, these computational approaches can be applied in historical linguistics.
Sustainability has been defined as meeting the needs of the present without compromising the ability of future generations to meet their own needs. But what are the needs of the present? And are they met? From the poor performance of the 2030 Sustainable Development Goals (SDGs), defined by the UN in 2015, not even the collective needs of the present seem to be met. How, then, can we expect not to compromise the needs of the future? Is the achievement of global world goals incompatible with the characteristic processes of human evolution, as some authors have recently suggested? Simple mathematical models cannot capture the whole breadth of human experience and destiny. But, on the other hand, one should not neglect whatever insights they may provide. And what these models teach us is how the behavior pattern "Parochial cooperation - Conflict - Growth" was reached and how this pattern, in addition to leading to several types of crises, also stands in the way of the global governance needed to achieve the SDGs.
Decades of scientific inquiry have sought to understand how evolution fosters cooperation, a concept seemingly at odds with the belief that evolution should produce rational, self-interested individuals. Most previous work has focused on the evolution of cooperation among boundedly rational individuals whose decisions are governed by behavioral rules that do not need to be rational. Here, using an evolutionary model, we study how altruism can evolve in a community of rational agents and promote cooperation. We show that in both well-mixed and structured populations, a population of objectively rational agents is readily invaded by mutant individuals who make rational decisions but evolve a distorted (i.e., subjective) perception of their payoffs. This promotes behavioral diversity and gives rise to the evolution of rational, other-regarding agents who naturally solve all the known strategic problems of two-person, two-strategy games by perceiving their games as pure coordination games.
Epidemic forecasting tools embrace the stochasticity and heterogeneity of disease spread to predict the growth and size of outbreaks. Conceptually, stochasticity and heterogeneity are often modeled as branching processes or as percolation on contact networks. Mathematically, probability generating functions provide a flexible and efficient tool to describe these models and quickly produce forecasts. While their predictions are probabilistic, i.e., distributions of outcomes, they depend deterministically on the input distribution of transmission statistics and/or contact structure. Since these inputs can be noisy data or models of high dimension, traditional sensitivity analyses are computationally prohibitive and are therefore rarely used. Here, we use statistical condition estimation to measure the sensitivity of stochastic polynomials representing noisy generating functions. In doing so, we can separate the stochasticity of their forecasts from potential noise in their input. For standard epidemic models, we find that predictions are most sensitive at the critical epidemic threshold (basic reproduction number $R_0 = 1$) only if the transmission is sufficiently homogeneous (dispersion parameter $k > 0.3$). Surprisingly, in heterogeneous systems ($k \leq 0.3$), the sensitivity is highest for values of $R_{0} > 1$. We expect our methods will improve the transparency and applicability of probability generating functions, whose utility as epidemic forecasting tools continues to grow.
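The branching-process picture invoked here can be made concrete with a probability generating function. A minimal sketch, assuming a negative-binomial offspring distribution with mean $R_0$ and dispersion $k$ (the standard parametrisation in this literature, not necessarily the paper's exact setup); the outbreak's extinction probability is the minimal fixed point of the PGF:

```python
def nb_pgf(s, R0, k):
    # PGF of a negative binomial offspring distribution with mean R0
    # and dispersion parameter k (epidemiological parametrisation).
    return (1.0 + (R0 / k) * (1.0 - s)) ** (-k)

def extinction_prob(R0, k, tol=1e-12, max_iter=10_000):
    # Iterate q <- g(q) from q = 0; this converges monotonically to the
    # smallest fixed point, the probability the outbreak dies out.
    q = 0.0
    for _ in range(max_iter):
        q_new = nb_pgf(q, R0, k)
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q

print(extinction_prob(2.0, 0.3))  # heterogeneous: extinction likely despite R0 > 1
print(extinction_prob(0.8, 0.3))  # subcritical: extinction is certain (approx. 1)
```

At the same supercritical $R_0$, smaller $k$ (more heterogeneity) yields a higher extinction probability, which is one reason sensitivity behaves differently in the two regimes the abstract describes.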
In the context of population dynamics, identifying effective model features, such as fecundity and mortality rates, is generally a complex and computationally intensive process, especially when the dynamics are heterogeneous across the population. In this work, we propose a Weak form Scientific Machine Learning-based method for selecting appropriate model ingredients from a library of scientifically feasible functions used to model structured populations. This method uses extensions of the Weak form Sparse Identification of Nonlinear Dynamics (WSINDy) method to select the best-fitting ingredients from noisy time-series histogram data. This extension includes learning heterogeneous dynamics and also learning the boundary process of the model directly from the data. We additionally provide a cross-validation method which helps fine-tune the recovered boundary process to the data. Several test cases are considered, demonstrating the method's performance for different previously studied models, including age- and size-structured models. Through these examples, we examine both the advantages and limitations of the method, with a particular focus on the distinguishability of terms in the library.
A social norm defines what is good and what is bad in social contexts, as well as what to do based on such assessments. A stable social norm should be maintained against errors committed by its players. In addition, individuals may have different probabilities of errors in following the norm, and a social norm would be unstable if it benefited those who do not follow the norm carefully. In this work, we show that Simple Standing, which has been known to resist errors and mutants successfully, actually exhibits threshold behavior. That is, in a population of individuals playing the donation game according to Simple Standing, the residents can suppress the invasion of mutants with higher error proneness only if the residents' own error proneness is sufficiently low. Otherwise, the population will be invaded by mutants that commit assessment errors more frequently, and a series of such invasions will eventually undermine the existing social norm. This study suggests that the stability analysis of a social norm may have a different picture if the probability of error itself is regarded as an individual attribute.
1. Although environmental variability is expected to play a more prominent role under climate change, current demographic models that ignore the differential environmental histories of cohorts across generations are unlikely to accurately predict population dynamics and growth. The use of these approaches, which we collectively refer to as non-time-structured models (nTSMs), will instead yield error-prone estimates by giving rise to a form of ecological memory loss due to their inability to account for the historical effects of past environmental exposure on subsequent growth rates. 2. To address this important issue, we introduce a new class of time-structured models (TSMs) that accurately depict growth under variable environments by splitting seemingly homogeneous populations into distinct demographic cohorts based on their past exposure to environmental fluctuations. By accounting for this cryptic population structure, TSMs accurately simulate the historical effects of environmental variability, even when individuals exhibit different degrees of phenotypic plasticity. 3. Here, we provide a conceptual framework, the mathematical tools needed to simulate any TSM, and a closed-form solution for simple exponential growth. We then show that traditional nTSMs yield large errors compared to TSMs when estimating population dynamics under fluctuating temperatures. Overall, TSMs represent a critical tool for predicting population growth in a variable world.
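One ingredient of the nTSM error, nonlinear averaging over heterogeneous thermal exposures, can be sketched in a few lines. This toy example assumes a convex thermal response (the functional form is invented for illustration) and captures only the averaging effect, not the full cohort bookkeeping of a TSM:

```python
import numpy as np

rng = np.random.default_rng(1)

def growth_rate(T):
    # Assumed convex thermal response of per-capita growth (illustrative)
    return 0.1 + 0.02 * (T - 20.0) ** 2

temps = rng.normal(20.0, 3.0, size=1000)   # distinct thermal exposures

r_tsm = growth_rate(temps).mean()          # average growth across cohorts
r_ntsm = growth_rate(temps.mean())         # growth at the averaged temperature

# By Jensen's inequality, averaging the environment first underestimates
# growth whenever the response is convex and temperatures fluctuate.
print(r_tsm, r_ntsm)
```

Here the nTSM-style shortcut (average the environment, then compute growth) systematically disagrees with the cohort-resolved average, which is the kind of bias the abstract attributes to ecological memory loss.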
Many classes of phylogenetic networks have been proposed in the literature. A feature of many of these classes is that if one restricts a network in the class to a subset of its leaves, then the resulting network may no longer lie within this class. This has implications for their biological applicability, since some species -- which are the leaves of an underlying evolutionary network -- may be missing (e.g., they may have become extinct, or there are no data available for them) or we may simply wish to focus attention on a subset of the species. On the other hand, certain classes of networks are `closed' when we restrict to subsets of leaves, such as (i) the classes of all phylogenetic networks or all phylogenetic trees; (ii) the classes of galled networks, simplicial networks, galled trees; and (iii) the classes of networks that have some parameter that is monotone-under-leaf-subsampling (e.g., the number of reticulations, height, etc) bounded by some fixed value. It is easily shown that a closed subclass of phylogenetic trees is either all trees or a vanishingly small proportion of them (as the number of leaves grows). In this short paper, we explore whether this dichotomy phenomenon holds for other classes of phylogenetic networks, and their subclasses.
Researchers puzzle over questions as to how rare species survive extinction, and why a significant proportion of microbial taxa are dormant. Computational simulation modeling by a genetic algorithm provides some answers. First, a weak/rare/lowly-adapted species can obtain significantly higher fitness by resorting to sporadic dormancy; thereby the probability of extinction is reduced. Second, the extent of fitness-gain is greater when a higher fraction of the population is dormant; thus, the probability of species survival is greater for higher prevalence of dormancy. In sum, even when the environment is unfavorable initially and remains unchanged, sporadic dormancy enables a weak/rare species to enhance the extent of favorable adaptation over time, successfully combating the forces of natural selection.
Tree-grass coexistence is a defining feature of savanna ecosystems, which play an important role in supporting biodiversity and human populations worldwide. While recent advances have clarified many of the underlying processes, how these mechanisms interact to shape ecosystem dynamics under environmental stress is not yet understood. Here, we present and analyze a minimalistic spatially extended model of tree-grass dynamics in dry savannas. We incorporate tree facilitation of grasses through shading and grass competition with trees for water, both varying with tree life stage. Our model shows that these mechanisms lead to grass-tree coexistence and bistability between savanna and grassland states. Moreover, the model predicts vegetation patterns consisting of trees and grasses, particularly under harsh environmental conditions, which can persist in situations where a non-spatial version of the model predicts ecosystem collapse from savanna to grassland instead (a phenomenon called "Turing-evades-tipping"). Additionally, we identify a novel "Turing-triggers-tipping" mechanism, where unstable pattern formation drives tipping events that are overlooked when spatial dynamics are not included. These transient patterns act as early warning signals for ecosystem transitions, offering a critical window for intervention. Further theoretical and empirical research is needed to determine when spatial patterns prevent tipping or drive collapse.
Ecological communities are composed of species interactions that respond to environmental fluctuations. Despite increasing evidence of temporal variation in these interactions, most theoretical frameworks remain rooted in static assumptions. Here, we develop and apply a time-varying network model to five long-term ecological datasets spanning diverse taxa and environments. Using a generalized Lotka-Volterra framework with environmental covariates, we quantify temporal rewiring of interspecific interactions, asymmetry patterns, and structural stability. Our results reveal contrasting dynamics across ecosystems: in datasets with rich temporal resolution, interaction networks exhibit marked rewiring and shifts in cooperation-competition ratios that correlate with environmental stress, consistent, though not always linearly, with the stress-gradient hypothesis. Conversely, in datasets with coarser temporal sampling, networks retain constant interaction sign structure and remain in cooperation-dominated regimes. These findings highlight the importance of temporal resolution and environmental context in shaping ecological coexistence.
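The basic object of such a framework, a generalized Lotka-Volterra step whose interaction coefficients are modulated by an environmental covariate, can be sketched as follows. The covariate form and all parameters are invented for illustration and are not the paper's fitted model:

```python
import numpy as np

def glv_step(x, r, A, dt):
    # Euler step of dx_i/dt = x_i * (r_i + sum_j A_ij x_j)
    return x + dt * x * (r + A @ x)

x = np.array([0.5, 0.5])        # species densities
r = np.array([0.8, 0.6])        # intrinsic growth rates (illustrative)
for t in range(2000):
    E = np.sin(2 * np.pi * t / 500)          # environmental covariate
    A = np.array([[-1.0, 0.2 + 0.3 * E],     # off-diagonal interactions
                  [0.1 - 0.3 * E, -1.0]])    # rewire with the environment
    x = glv_step(x, r, A, 0.01)

print(x)   # densities stay positive and bounded for these parameters
```

Note how the sign of each off-diagonal entry can flip with E, which is exactly the cooperation-competition rewiring the abstract describes; fitting such covariate-dependent coefficients to time series is the statistical task the paper takes on.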
Unlike many physical nonequilibrium systems, in biological systems, the coupling to external energy sources is not a fixed parameter but adaptively controlled by the system itself. We do not have theoretical frameworks that allow for such adaptability. As a result, we cannot understand emergent behavior in living systems where structure formation and non-equilibrium drive coevolve. Here, using ecosystems as a model of adaptive systems, we develop a framework of living circuits whose architecture changes adaptively with the energy dissipated in each circuit edge. We find that unlike traditional nonequilibrium systems, living circuits exhibit a phase transition from equilibrium death to a nonequilibrium dissipative state beyond a critical driving potential. This transition emerges through a feedback mechanism that saves the weakest edges by routing dissipation through them, even though the adaptive rule locally rewards the strongest dissipating edges. Despite lacking any global optimization principle, living circuits achieve near-maximal dissipation, with higher drive promoting more complex circuits. Our work establishes ecosystems as paradigmatic examples of living circuits whose structure and dissipation are tuned through local adaptive rules.
People make strategic decisions multiple times a day. We act strategically in negotiations, when we coordinate our actions with others, or when we choose with whom to cooperate. The resulting dynamics can be studied with evolutionary game theory. This framework explores how people adapt their decisions over time, in light of how effective their strategies have proven to be. A crucial quantity in respective models is the strength of selection. This quantity regulates how likely individuals switch to a better strategy when given the choice. The larger the selection strength, the more biased is the learning process in favor of strategies with large payoffs. Therefore, this quantity is often interpreted as a measure of rationality. Traditionally, most models take selection strength to be a fixed parameter. Instead, here we allow the individuals' strategies and their selection strength to co-evolve. The endpoints of this co-evolutionary process depend on the strategic interaction in place. In many prisoner's dilemmas, selection strength increases indefinitely, as one may expect. However, in snowdrift or stag-hunt games, it can either converge to a finite value, or we observe evolutionary branching altogether - such that different individuals arrive at different selection strengths. Overall, this work sheds light on how evolution might shape learning mechanisms for social behavior. It suggests that boundedly rational learning is not only a by-product of cognitive constraints. Instead it might also evolve as a means to gain strategic advantages.
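The role of selection strength can be made concrete with the pairwise-comparison (Fermi) update rule, a standard choice in this literature; the payoff values below are illustrative:

```python
import math

def switch_probability(payoff_self, payoff_other, beta):
    # Fermi rule: probability of adopting the other player's strategy.
    # beta is the selection strength: beta -> 0 gives random imitation
    # (probability 1/2); large beta makes the choice nearly deterministic
    # in favour of the higher payoff.
    return 1.0 / (1.0 + math.exp(-beta * (payoff_other - payoff_self)))

print(switch_probability(1.0, 2.0, 0.0))    # 0.5: zero selection strength
print(switch_probability(1.0, 2.0, 10.0))   # near 1: strong selection, better payoff
print(switch_probability(2.0, 1.0, 10.0))   # near 0: strong selection, worse payoff
```

Letting beta itself mutate and evolve alongside the strategies, as the abstract proposes, then amounts to treating the steepness of this curve as a heritable trait.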
The human niche represents the intersection of biological, ecological, cultural, and technological processes that have co-evolved to shape human adaptation and societal complexity. This paper explores the human niche through the lens of macroecological scaling theory, seeking to define and quantify the dimensions along which human ecological strategies have diversified. By leveraging concepts from classic niche theory, niche construction, and complex adaptive systems, I develop a framework for understanding human ecology as both predictable within mammalian scaling relationships and uniquely divergent due to social, cognitive, and technological factors. Key dimensions of the human niche (metabolism, cognition, sociality, and computation) are examined through scaling laws that structure human interactions with the environment and each other. The paper demonstrates how human niche expansion over evolutionary time has been characterized by increasing metabolic consumption, information processing capacity, and the formation of larger, more interconnected social networks. This cumulative trajectory has led to the unprecedented scale of contemporary human societies, with implications for sustainability, economic development, and future niche expansion, including into space. The study underscores the need for an integrative, quantitative approach to human ecology that situates human adaptability within broader ecological and evolutionary constraints.
Cancer cells are often seen to prefer glycolytic metabolism over oxidative phosphorylation even in the presence of oxygen, a phenomenon termed the Warburg effect. Despite significant strides in the decades since its discovery, a clear basis is yet to be established for the Warburg effect and why cancer cells show such a preference for aerobic glycolysis. In this review, we draw on what is known about similar metabolic shifts both in normal mammalian physiology and overflow metabolism in microbes to shed new light on whether aerobic glycolysis in cancer represents some form of optimisation of cellular metabolism. From microbes to cancer, we find that metabolic shifts favouring glycolysis are sometimes driven by the need for faster growth, but the growth rate is by no means a universal goal of optimal metabolism. Instead, optimisation goals at the cellular level are often multi-faceted and any given metabolic state must be considered in the context of both its energetic costs and benefits over a range of environmental contexts. For this purpose, we identify the conceptual framework of resource allocation as a potential testbed for the investigation of the cost-benefit balance of cellular metabolic strategies. Such a framework is also readily integrated with dynamical systems modelling, making it a promising avenue for new answers to the age-old question of why cells, from cancers to microbes, choose the metabolic strategies they do.
We propose a compartmental model for epidemiology wherein the population is split into groups that either comply or refuse to comply with protocols designed to slow the spread of a disease. Parallel to the disease spread, we assume that noncompliance with protocols spreads as a social contagion. We begin by deriving the reproductive ratio for a deterministic version of the model, and use this to fully characterize the local stability of disease-free equilibrium points. We then append the deterministic model with stochastic effects, specifically assuming that the transmission rate of the disease and the transmission rate of the social contagion are uncertain. We prove global existence and nonnegativity for our stochastic model. Then, using suitably constructed stochastic Lyapunov functions, we analyze the behavior of the stochastic system with respect to certain disease-free states. We demonstrate all of our results with numerical simulations.