We present a modified simulated annealing method with a dynamical choice of the cooling temperature. The latter is determined via a closed-loop control and is proven to yield exponential decay of the entropy of the particle system. The analysis is carried out through kinetic equations for interacting particle systems describing the simulated annealing method in an extended phase space. Decay estimates are derived under the quasi-invariant scaling of the resulting system of Boltzmann-type equations to assess the consistency with their mean-field limit. Numerical results are provided to illustrate and support the theoretical findings.
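As a rough illustration of the closed-loop cooling idea above, the sketch below runs Metropolis-type annealing and adjusts the temperature from the recent acceptance rate; the toy objective, the exponential update rule, and all parameter values are assumptions made for illustration, not the paper's kinetic/Boltzmann formulation.

import numpy as np

def objective(x):
    # Toy multimodal objective (an assumption, not from the paper)
    return np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))

def annealing_with_feedback(x0, steps=20000, T0=10.0, gain=0.05, seed=0):
    """Metropolis annealing where the temperature is lowered by a simple
    closed-loop rule based on the recent acceptance rate."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, float)
    fx, T = objective(x), T0
    accepted = 0
    for k in range(1, steps + 1):
        y = x + rng.normal(scale=np.sqrt(T), size=x.shape)
        fy = objective(y)
        if fy < fx or rng.random() < np.exp(-(fy - fx) / T):
            x, fx = y, fy
            accepted += 1
        if k % 100 == 0:                       # feedback update every 100 proposals
            rate = accepted / 100
            T *= np.exp(-gain * (rate - 0.25)) # cool faster when acceptance is high
            accepted = 0
    return x, fx

best_x, best_f = annealing_with_feedback(np.full(5, 3.0))
print(best_x, best_f)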
The study of flocking in biological systems has identified conditions for self-organized collective behavior, inspiring the development of decentralized strategies to coordinate the dynamics of swarms of drones and other autonomous vehicles. Previous research has focused primarily on the role of the time-varying interaction network among agents while assuming that the agents themselves are identical or nearly identical. Here, we depart from this conventional assumption to investigate how inter-individual differences between agents affect the stability and convergence in flocking dynamics. We show that flocks of agents with optimally assigned heterogeneous parameters significantly outperform their homogeneous counterparts, achieving 20-40% faster convergence to desired formations across various control tasks. These tasks include target tracking, flock formation, and obstacle maneuvering. In systems with communication delays, heterogeneity can enable convergence even when flocking is unstable for identical agents. Our results challenge existing paradigms in multi-agent control and establish system disorder as an adaptive, distributed mechanism to promote collective behavior in flocking dynamics.
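The following minimal harness (a toy velocity-consensus model, not the paper's flocking controller) compares convergence times under homogeneous versus heterogeneous per-agent gains; the heterogeneous gains are drawn at random purely as a placeholder for the optimally assigned parameters studied above.

import numpy as np

def convergence_time(gains, steps=2000, dt=0.05, tol=1e-2, seed=1):
    # Each agent relaxes its velocity toward the instantaneous group mean with
    # its own gain; return the step at which the velocity spread falls below tol.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(len(gains), 2))
    for t in range(steps):
        v_mean = v.mean(axis=0)
        if np.max(np.linalg.norm(v - v_mean, axis=1)) < tol:
            return t
        v += dt * gains[:, None] * (v_mean - v)
    return steps

n = 50
homogeneous = np.full(n, 1.0)
heterogeneous = np.random.default_rng(2).uniform(0.5, 1.5, n)  # placeholder assignment
print(convergence_time(homogeneous), convergence_time(heterogeneous))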
Higher-order interactions are prevalent in real-world complex systems and exert unique influences on system evolution that cannot be captured by pairwise interactions. We incorporate game transitions into a higher-order prisoner's dilemma game model and find that these transitions consistently promote cooperation. Moreover, in systems with game transitions, the fraction of higher-order interactions has a dual impact, either enhancing the emergence and persistence of cooperation or facilitating invasions that promote defection within an otherwise cooperative system.
We propose and analyze a model for the dynamics of the flow into and out of a nest for the arboreal turtle ant $\textit{Cephalotes goniodontus}$ during foraging to investigate a possible mechanism for the emergence of oscillations. In our model, there is mixed dynamic feedback between the flow of ants between different behavioral compartments and the concentration of pheromone along trails. On one hand, the ants deposit pheromone along the trail, which provides a positive feedback by increasing rates of return to the nest. On the other hand, pheromone evaporation is a source of negative feedback, as it depletes the pheromone and inhibits the return rate. We prove that the model is globally asymptotically stable in the absence of pheromone feedback. Then we show that pheromone feedback can lead to a loss of stability of the equilibrium and onset of sustained oscillations in the flow in and out of the nest via a Hopf bifurcation. This analysis sheds light on a potential key mechanism that enables arboreal turtle ants to effectively optimize their trail networks to minimize traveled path lengths and eliminate graph cycles.
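The feedback structure described above can be caricatured as a small ODE system; the compartments, the linear pheromone dependence of the return rate, and the parameter values below are illustrative guesses rather than the paper's model, and one would sweep parameters to probe the onset of oscillations.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative three-variable caricature (not the paper's exact compartments):
# N = ants in the nest, F = ants on the trail, p = trail pheromone.
def rhs(t, y, a=0.5, r0=0.1, b=2.0, k=1.0, delta=0.8):
    N, F, p = y
    r = r0 + b * p              # pheromone raises the return rate (positive feedback)
    dN = r * F - a * N          # returns minus departures
    dF = a * N - r * F
    dp = k * F - delta * p      # deposition by trail ants minus evaporation
    return [dN, dF, dp]

sol = solve_ivp(rhs, (0, 200), [50.0, 0.0, 0.0], max_step=0.1)
print(sol.y[:, -1])             # inspect whether the flow settles or keeps oscillating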
A prediction makes a claim about a system's future given knowledge of its past. A retrodiction makes a claim about its past given knowledge of its future. We introduce the ambidextrous hidden Markov chain that does both optimally -- the bidirectional machine whose state structure makes explicit all statistical correlations in a stochastic process. We introduce an informational taxonomy to profile these correlations via a suite of multivariate information measures. While prior results laid out the different kinds of information contained in isolated measurements, in addition to being limited to single measurements the associated informations were challenging to calculate explicitly. Overcoming these via bidirectional machine states, we expand that analysis to information embedded across sequential measurements. The result highlights fourteen new interpretable and calculable information measures that fully characterize a process' informational structure. Additionally, we introduce a labeling and indexing scheme that systematizes information-theoretic analyses of highly complex multivariate systems. Operationalizing this, we provide algorithms to directly calculate all of these quantities in closed form for finitely-modeled processes.
Disorder is often considered detrimental to coherence. However, under specific conditions, it can enhance synchronization. We develop a machine-learning framework to design optimal disorder configurations that maximize phase synchronization. In particular, utilizing a system of coupled nonlinear pendulums with disorder and noise, we train a feedforward neural network (FNN), with the disorder parameters as input, to predict the Shannon entropy index that quantifies the phase synchronization strength. The trained FNN model is then deployed to search for the optimal disorder configurations in the high-dimensional space of the disorder parameters, providing a computationally efficient replacement for the stochastic differential equation solvers. Our results demonstrate that the FNN is capable of accurately predicting synchronization and facilitates an efficient inverse-design solution for optimizing and enhancing synchronization.
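A skeletal version of the surrogate-then-search workflow is sketched below; the synthetic entropy_index function merely stands in for the expensive SDE-based synchronization measure, and the dimensionality, network size, and search strategy are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def entropy_index(disorder):
    # Placeholder for the SDE-based synchronization measure (purely synthetic).
    return np.exp(-np.sum((disorder - 0.3) ** 2, axis=-1)) + 0.05 * rng.standard_normal(disorder.shape[0])

# Train the surrogate on a modest set of sampled disorder configurations.
X = rng.uniform(-1, 1, size=(500, 10))     # 10 disorder parameters (assumption)
y = entropy_index(X)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# Deploy the surrogate: cheap random search for a high-synchronization configuration.
candidates = rng.uniform(-1, 1, size=(100000, 10))
best = candidates[np.argmax(surrogate.predict(candidates))]
print(best)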
We present a framework for controlling the collective phase of a system of coupled oscillators described by the Kuramoto model under the influence of a periodic external input by combining the methods of dynamical reduction and optimal control. We employ the Ott-Antonsen ansatz and phase-amplitude reduction theory to derive a pair of one-dimensional equations for the collective phase and amplitude of mutually synchronized oscillators. We then use optimal control theory to derive the optimal input for controlling the collective phase based on the phase equation and evaluate the effect of the control input on the degree of mutual synchrony using the amplitude equation. We set up an optimal control problem for the system to quickly resynchronize with the periodic input after a sudden phase shift in the periodic input, a situation similar to jet lag, and demonstrate the validity of the framework through numerical simulations.
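A minimal sketch of the reduced dynamics (not the optimal control design itself) is the forced Ott-Antonsen equation for the Kuramoto order parameter with a Lorentzian frequency distribution; the parameter values below are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

# Ott-Antonsen equation for the complex order parameter z of a Kuramoto population
# with Lorentzian frequencies (centre w0, width D), coupling K, and a common
# sinusoidal input of strength eps and frequency W.
w0, D, K, eps, W = 1.0, 0.05, 0.8, 0.2, 1.0

def oa_rhs(t, y):
    z = y[0] + 1j * y[1]
    H = K * z + eps * np.exp(1j * W * t)          # total field acting on the oscillators
    dz = (1j * w0 - D) * z + 0.5 * (H - np.conj(H) * z**2)
    return [dz.real, dz.imag]

sol = solve_ivp(oa_rhs, (0, 200), [0.1, 0.0], max_step=0.01)
z = sol.y[0] + 1j * sol.y[1]
print(np.abs(z[-1]), np.angle(z[-1]))             # collective amplitude and phase at the end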
We propose a Gaussian process regression framework with additive periodic kernels for the analysis of two-body interactions in coupled oscillator systems. While finite-order Fourier expansions determined by Bayesian methods can still yield artifacts such as high-amplitude, high-frequency vibrations, our additive periodic kernel approach is demonstrated to effectively circumvent these issues. Furthermore, by exploiting the additive and periodic nature of the coupling functions, we significantly reduce the effective dimensionality of the inference problem. We first validate our method on simple coupled phase oscillators and demonstrate its robustness on more complex systems, including Van der Pol and FitzHugh-Nagumo oscillators, under conditions of biased or limited data. We next apply our approach to spiking neural networks modeled by Hodgkin-Huxley equations, for which we successfully recover the underlying interaction functions. These results highlight the flexibility and stability of Gaussian process regression in capturing nonlinear, periodic interactions in oscillator networks. Our framework provides a practical alternative to conventional methods, enabling data-driven studies of synchronized rhythmic systems across physics, biology, and engineering.
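For a rough sense of the approach, the sketch below fits a 2*pi-periodic coupling function with a periodic kernel in scikit-learn; the true function is synthetic, and the additive structure over both oscillators' phases used in the paper is omitted for brevity.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

# Toy setup: recover a periodic coupling function from noisy phase-difference data.
rng = np.random.default_rng(0)
true_coupling = lambda d: np.sin(d) + 0.3 * np.cos(2 * d)   # synthetic ground truth

d_train = rng.uniform(0, 2 * np.pi, size=(80, 1))
y_train = true_coupling(d_train.ravel()) + 0.1 * rng.standard_normal(80)

kernel = ExpSineSquared(length_scale=1.0, periodicity=2 * np.pi) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(d_train, y_train)

d_test = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
mean, std = gp.predict(d_test, return_std=True)             # posterior mean and uncertainty
print(np.max(np.abs(mean - true_coupling(d_test.ravel()))))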
Flexible modulation of temporal dynamics in neural sequences underlies many cognitive processes. For instance, we can adaptively change the speed of motor sequences and speech. While such flexibility is influenced by various factors such as attention and context, the common neural mechanisms responsible for this modulation remain poorly understood. We developed a biologically plausible neural network model that incorporates neurons with multiple timescales and Hebbian learning rules. This model is capable of generating simple sequential patterns as well as performing delayed match-to-sample (DMS) tasks that require the retention of stimulus identity. Fast neural dynamics establish metastable states, while slow neural dynamics maintain task-relevant information and modulate the stability of these states to enable temporal processing. We systematically analyzed how factors such as neuronal gain, external input strength (contextual cues), and task difficulty influence the temporal properties of neural activity sequences - specifically, dwell time within patterns and transition times between successive patterns. We found that these factors flexibly modulate the stability of metastable states. Our findings provide a unified mechanism for understanding various forms of temporal modulation and suggest a novel computational role for neural timescale diversity in dynamically adapting cognitive performance to changing environmental demands.
Soft and active condensed matter represent a class of fascinating materials that we encounter in our everyday lives -- and constitute life itself. Control signals interact with the dynamics of these systems, and this influence is formalized in control theory and optimal control. Recent advances have employed various control-theoretical methods to design desired dynamics, properties, and functionality. Here we provide an introduction to optimal control aimed at physicists working with soft and active matter. We describe two main categories of control, feedforward control and feedback control, and their corresponding optimal control methods. We emphasize their parallels to Lagrangian and Hamiltonian mechanics, and provide a worked example problem. Finally, we review recent studies of control in soft, active, and related systems. Applying control theory to soft, active, and living systems will lead to an improved understanding of the signal processing, information flows, and actuation that underlie the physics of life.
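As a concrete, minimal feedback-control example in this spirit, consider a scalar discrete-time LQR for an overdamped bead in a trap; the dynamics and cost weights below are illustrative assumptions, not the article's worked example.

import numpy as np

# Dynamics x_{k+1} = a*x_k + b*u_k, cost sum(q*x^2 + r*u^2) over N steps.
a, b, q, r, N = 0.95, 0.1, 1.0, 0.1, 50

# Backward Riccati recursion for the finite-horizon feedback gains.
P, gains = q, []
for _ in range(N):
    K = (b * P * a) / (r + b * P * b)
    gains.append(K)
    P = q + a * P * a - a * P * b * K
gains = gains[::-1]

# Closed-loop rollout from an initial displacement.
x = 2.0
for K in gains:
    u = -K * x          # feedback law u_k = -K_k * x_k
    x = a * x + b * u
print(x)                 # bead driven close to the origin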
Populations of agents often exhibit surprising collective behavior emerging from simple local interactions. The common belief is that the agents must possess a certain level of cognitive ability for such emergent collective behavior to occur. However, contrary to this assumption, it is also well known that even noncognitive agents are capable of displaying nontrivial behavior. Here we consider an intermediate case, where the agents borrow a little from both extremes. We assume a population of agents performing a random walk in a bounded environment, on a square lattice. The agents can sense their immediate neighborhood, and they attempt to move into a randomly selected empty site, avoiding collisions. Also, the agents temporarily stop moving when they are in contact with at least two other agents. We show that, surprisingly, such a rudimentary population of agents undergoes a percolation phase transition and self-organizes into a large polymer-like structure, as a consequence of an attractive entropic force emerging from their restricted valence and local spatial arrangement.
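A minimal lattice implementation of the stated rules might look as follows; for brevity the sketch uses periodic rather than bounded boundaries, and the lattice size, density, and step count are arbitrary choices.

import numpy as np

# Agents hop to a random empty neighbouring site, but stay put while touching
# two or more other agents.
L, n_agents, steps = 64, 1200, 5000
rng = np.random.default_rng(0)
occ = np.zeros((L, L), dtype=bool)
sites = rng.choice(L * L, size=n_agents, replace=False)
agents = np.stack(np.unravel_index(sites, (L, L)), axis=1)
occ[agents[:, 0], agents[:, 1]] = True
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

for _ in range(steps):
    i = rng.integers(n_agents)
    x, y = agents[i]
    neighbours = (agents[i] + moves) % L
    n_contacts = occ[neighbours[:, 0], neighbours[:, 1]].sum()
    if n_contacts >= 2:
        continue                        # agent pauses while in contact with >= 2 others
    nx, ny = neighbours[rng.integers(4)]
    if not occ[nx, ny]:                 # move only into an empty site (no collisions)
        occ[x, y] = False
        occ[nx, ny] = True
        agents[i] = (nx, ny)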
The path toward the emergence of life in our biosphere involved several key events allowing for the persistence, reproduction and evolution of molecular systems. All these processes took place in a given environmental context and required both molecular diversity and the right non-equilibrium conditions to sustain and favour complex self-sustaining molecular networks capable of evolving by natural selection. Life is a process that departs from non-life in several ways and cannot be reduced to standard chemical reactions. Moreover, achieving higher levels of complexity required the emergence of novelties. How did that happen? Here, we review different case studies associated with the early origins of life in terms of phase transitions and bifurcations, using symmetry breaking and percolation as two central components. We discuss simple models that allow for understanding key steps regarding life origins, such as molecular chirality, the transition to the first replicators and cooperators, the problem of error thresholds and information loss, and the potential for "order for free" as the basis for the emergence of life.
Bacteria evolve in volatile environments and complex spatial structures. Migration, fluctuations, and environmental variability therefore have a significant impact on the evolution of microbial populations. We consider a class of spatially explicit metapopulation models arranged as regular (circulation) graphs where wild-type and mutant cells compete in a time-fluctuating environment where demes (subpopulations) are connected by slow cell migration. The carrying capacity is the same at each deme and endlessly switches between two values associated with harsh and mild environmental conditions. When the rate of switching is neither too slow nor too fast, the dynamics is characterised by bottlenecks and the population is prone to fluctuations or extinction. We analyse how slow migration, spatial structure, and fluctuations affect the phenomena of fixation and extinction on clique, cycle, and square lattice metapopulations. When the carrying capacity remains large, bottlenecks are weak, and deme extinction can be ignored. The dynamics is thus captured by a coarse-grained description within which the probability and mean time of fixation are obtained analytically. This allows us to show that, in contrast to what happens in static environments, the fixation probability depends on the migration rate. We also show that the fixation probability and mean fixation time can exhibit a non-monotonic dependence on the switching rate. When the carrying capacity is small under harsh conditions, bottlenecks are strong, and the metapopulation evolution is shaped by the coupling of deme extinction and strain competition. This yields rich dynamical scenarios, among which we identify the best conditions to eradicate mutants without dooming the metapopulation to extinction. We offer an interpretation of these findings in the context of an idealised treatment and discuss possible generalisations of our models.
Understanding collective self-organization in active matter, such as bird flocks and fish schools, remains a grand challenge in physics. Alignment interactions are essential for flocking, yet alone, they are generally considered insufficient to maintain cohesion against noise, forcing traditional models to rely on artificial boundaries or added attractive forces. Here, we report the first model to achieve cohesive flocking using purely alignment interactions, introducing predictive alignment: agents orient based on the predicted future headings of their neighbors. Implemented in a discrete-time Vicsek-type framework, this approach delivers robust, noise-resistant cohesion without additional parameters. In the stable regime, flock size scales linearly with interaction radius, remaining nearly immune to noise or propulsion speed, and the group coherently follows a leader under noise. These findings reveal how predictive strategies enhance self-organization, paving the way for a new class of active matter models blending physics and cognitive-like dynamics.
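One way to prototype the idea is to replace the Vicsek alignment target with a naively extrapolated future heading, as in the sketch below; the specific predictor, the noise model, and the periodic box (the paper's point is that cohesion arises without such confinement) are all illustrative assumptions.

import numpy as np

# Vicsek-style update where each agent aligns to a *predicted* neighbour heading,
# here obtained by linearly extrapolating the last heading change.
N, L, R, v0, eta, steps = 200, 10.0, 1.0, 0.3, 0.2, 500
rng = np.random.default_rng(0)
pos = rng.uniform(0, L, (N, 2))
theta = rng.uniform(-np.pi, np.pi, N)
theta_prev = theta.copy()

for _ in range(steps):
    predicted = theta + (theta - theta_prev)          # naive one-step heading forecast
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)                        # periodic boundaries (for brevity)
    neigh = (dx ** 2).sum(-1) < R ** 2
    # Circular mean of the predicted headings of neighbours (self included).
    mean_dir = np.angle((neigh * np.exp(1j * predicted)[None, :]).sum(1))
    theta_prev = theta
    theta = mean_dir + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v0 * np.column_stack([np.cos(theta), np.sin(theta)])) % L

print(np.abs(np.exp(1j * theta).mean()))              # polar order parameter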
This perspective article investigates how auditory stimuli influence neural network dynamics using the FitzHugh-Nagumo (FHN) model and empirical brain connectivity data. Results show that synchronization is sensitive to both the frequency and amplitude of auditory input, with synchronization enhanced when input frequencies align with the system's intrinsic frequencies. Increased stimulus amplitude broadens the synchronization range, which is governed by a delicate interplay between the network's topology, the spatial location of the input, and the frequency characteristics of the cortical input signals. This perspective article also reveals that brain activity alternates between synchronized and desynchronized states, reflecting critical dynamics and phase transitions in neural networks. Notably, gamma-band synchronization is crucial for processing music, with coherence peaking in this frequency range. The findings emphasize the role of structural connectivity and network topology in modulating synchronization, providing insights into how music perception engages brain networks. This perspective article offers a computational framework for understanding neural mechanisms in music perception, with potential implications for cognitive neuroscience and music psychology.
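A toy version of the setup, two diffusively coupled FitzHugh-Nagumo units with a sinusoidal drive standing in for the auditory input, can be sketched as follows; the parameters are textbook values, not the study's empirically constrained ones.

import numpy as np
from scipy.integrate import solve_ivp

a, b, tau, g, A, f = 0.7, 0.8, 12.5, 0.1, 0.5, 0.1

def fhn(t, y):
    v1, w1, v2, w2 = y
    drive = A * np.sin(2 * np.pi * f * t)                # periodic stimulus to unit 1
    dv1 = v1 - v1**3 / 3 - w1 + g * (v2 - v1) + drive
    dw1 = (v1 + a - b * w1) / tau
    dv2 = v2 - v2**3 / 3 - w2 + g * (v1 - v2) + 0.5      # constant bias keeps unit 2 oscillating
    dw2 = (v2 + a - b * w2) / tau
    return [dv1, dw1, dv2, dw2]

sol = solve_ivp(fhn, (0, 500), [0.0, 0.0, 0.1, 0.0], max_step=0.05)
v1, v2 = sol.y[0], sol.y[2]
print(np.corrcoef(v1[-5000:], v2[-5000:])[0, 1])          # crude synchrony measure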
Natural and human-made common goods present key challenges due to their susceptibility to degradation, overuse, or congestion. We explore the self-organisation of their usage when individuals have access to several available commons but limited information on them. We propose an extension of the Win-Stay, Lose-Shift (WSLS) strategy for such systems, under which individuals use a resource iteratively until they are unsuccessful and then shift randomly. This simple strategy leads to a distribution of usage across the commons that improves on random shifting. Selective individuals who retain information on their usage and accordingly adapt their tolerance to failure in each common good improve the average experienced quality for an entire population. Hybrid systems of selective and non-selective individuals can lead to an equilibrium with equalised experienced quality akin to the ideal free distribution. We show that these results can be applied to the server selection problem faced by mobile users accessing Internet services and we perform realistic simulations to test their validity. Furthermore, these findings can be used to understand other real systems such as animal dispersal on grazing and foraging land, and to propose solutions to operators of systems of public transport or other technological commons.
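The basic Win-Stay, Lose-Shift dynamics over shared resources can be prototyped in a few lines; the congestion-dependent success probability below is an illustrative assumption, not the paper's model of resource quality.

import numpy as np

# Each user keeps its current resource after a success and jumps to a random
# other one after a failure.
n_users, n_resources, steps = 1000, 10, 2000
rng = np.random.default_rng(0)
choice = rng.integers(n_resources, size=n_users)

def success_prob(load):
    return 1.0 / (1.0 + load / 150.0)        # quality degrades with congestion (assumption)

for _ in range(steps):
    load = np.bincount(choice, minlength=n_resources)
    success = rng.random(n_users) < success_prob(load[choice])
    losers = ~success
    choice[losers] = rng.integers(n_resources, size=losers.sum())  # lose, then shift randomly

print(np.bincount(choice, minlength=n_resources))  # usage spread across the commons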
Uncovering the influence of behaviour on team performance involves understanding individual behaviour, interactions with others and the environment, variations across groups, and the effects of interventions. Although insights into each of these areas have accumulated in the sports science literature on football, it remains unclear how one can enhance team performance. We analyse the influence of football players' behaviour on team performance in a three-versus-one ball possession game by constructing and analysing a dynamical model. We developed a model for the motion of the players and the ball, which mathematically represented our hypotheses on players' behaviour and interactions. The model's plausibility was examined by comparing simulated outcomes with our experimental results. Possible influences of interventions were analysed through sensitivity analysis, where causal effects of several aspects of behaviour, such as pass speed and accuracy, were found. Our research highlights the potential of dynamical modelling for uncovering the influence of behaviour on team effectiveness.
This paper presents a unified framework, integrating information theory and statistical mechanics, to connect metric failure in high-dimensional data with emergence in complex systems. We propose the "Information Dilution Theorem," demonstrating that as dimensionality ($d$) increases, the mutual information efficiency between geometric metrics (e.g., Euclidean distance) and system states decays approximately as $O(1/d)$. This decay arises from the mismatch between linearly growing system entropy and sublinearly growing metric entropy, explaining the mechanism behind distance concentration. Building on this, we introduce information structural complexity ($C(S)$) based on the mutual information matrix spectrum and interaction encoding capacity ($C'$) derived from information bottleneck theory. The "Emergence Critical Theorem" states that when $C(S)$ exceeds $C'$, new global features inevitably emerge, satisfying a predefined mutual information threshold. This provides an operational criterion for self-organization and phase transitions. We discuss potential applications in physics, biology, and deep learning, suggesting potential directions like MI-based manifold learning (UMAP+) and offering a quantitative foundation for analyzing emergence across disciplines.
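The distance-concentration mechanism invoked above is easy to observe numerically: for i.i.d. points, the relative spread of pairwise Euclidean distances shrinks as the dimension grows, which is one face of the claimed O(1/d) loss of metric informativeness. The script below illustrates only the concentration effect, not the mutual-information bound itself.

import numpy as np

rng = np.random.default_rng(0)
for d in [2, 10, 100, 1000]:
    X = rng.standard_normal((100, d))
    diffs = X[:, None, :] - X[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)
    dist = dist[np.triu_indices(100, k=1)]
    print(d, dist.std() / dist.mean())       # relative contrast decays with dimension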