The simulation of whole-brain dynamics should reproduce realistic spontaneous and evoked neural activity across different scales, including emergent rhythms, spatio-temporal activation patterns, and macroscale complexity. Once a mathematical model is selected, its configuration must be determined by properly setting its parameters. A critical preliminary step in this process is defining an appropriate set of observables to guide the selection of model configurations (parameter tuning), laying the groundwork for quantitative calibration of accurate whole-brain models. Here, we address this challenge by presenting a framework that integrates two complementary tools: The Virtual Brain (TVB) platform for simulating whole-brain dynamics, and the Collaborative Brain Wave Analysis Pipeline (Cobrawap) for analyzing the simulations using a set of standardized metrics. We apply this framework to a 998-node human connectome, using two configurations of the Larter-Breakspear neural mass model: one with the TVB default parameters, the other tuned using Cobrawap. The results reveal that the tuned configuration exhibits several biologically relevant features, absent in the default model for both spontaneous and evoked dynamics. In response to external perturbations, the tuned model generates non-stereotyped, complex spatio-temporal activity, as measured by the perturbational complexity index. In spontaneous activity, it displays robust alpha-band oscillations, infra-slow rhythms, scale-free characteristics, greater spatio-temporal heterogeneity, and asymmetric functional connectivity. This work demonstrates the potential of combining TVB and Cobrawap to guide parameter tuning and lays the groundwork for data-driven calibration and validation of accurate whole-brain models.
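The perturbational complexity index mentioned above is conventionally built on Lempel-Ziv compressibility of binarized evoked activity. The abstract does not give the exact computation, but the underlying LZ76 phrase-counting step on a binary sequence can be sketched as follows (an illustrative sketch, not the authors' implementation):

```python
def lempel_ziv_complexity(s):
    """Count the distinct phrases in the LZ76 parsing of a binary string.

    Each phrase is the shortest extension of previously seen material;
    stereotyped (compressible) responses yield few phrases, while complex,
    non-stereotyped activity yields many.
    """
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Extend the phrase while it still occurs in the prefix seen so far.
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c
```

In PCI-style analyses this count is computed on thresholded, significance-masked evoked responses and normalized by the complexity expected for shuffled data.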
Neurons communicate through spikes, and spike timing is a crucial part of neuronal processing. Spike times can be recorded experimentally both intracellularly and extracellularly, and are the main output of state-of-the-art neural probes. On the other hand, neuronal activity is controlled at the molecular level by the currents generated by many different transmembrane proteins called ion channels. Connecting spike timing to ion channel composition remains an arduous task to date. To address this challenge, we developed a method that combines deep learning with a theoretical tool called Dynamic Input Conductances (DICs), which reduce the complexity of ion channel interactions into three interpretable components describing how neurons spike. Our approach uses deep learning to infer DICs directly from spike times and then generates populations of "twin" neuron models that replicate the observed activity while capturing natural variability in membrane channel composition. The method is fast, accurate, and works using only spike recordings. We also provide open-source software with a graphical interface, making it accessible to researchers without programming expertise.
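The pipeline above infers DICs directly from spike times; its actual input representation is not described in the abstract, but spike trains are typically summarized by interspike-interval (ISI) statistics of the following kind (a hypothetical feature set, for illustration only):

```python
def isi_features(spike_times):
    """Interspike-interval statistics of the kind a spike-to-DIC regressor
    could take as input (the paper's actual feature set is not specified).

    spike_times: sorted spike times in seconds.
    """
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return {
        "mean_isi": mean,              # average interval between spikes
        "cv_isi": var ** 0.5 / mean,   # coefficient of variation: 0 = clock-like
        "rate_hz": 1.0 / mean,         # mean firing rate
    }
```

A regular spike train has a CV near zero, while bursting or irregular firing pushes it toward or above one; features like these are the kind of compact summary a deep network can map onto conductance-space descriptions.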
This article presents a study involving 87 participants exposed to a stressful scenario in a virtual reality (VR) environment. An algorithm was developed to assign a positive or negative valence based on questionnaire responses. EEG signals were recorded, and a k-nearest neighbors (KNN) algorithm was trained to infer emotional valence from these signals. Our objective is to further develop mathematical models capable of describing the dynamic evolution of emotional and mental states.
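The abstract does not specify the EEG features or the value of k used; as a toy illustration, the k-nearest-neighbors vote over labeled feature vectors can be written in a few lines (the two-dimensional features and cluster positions below are made up):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training points nearest to x (Euclidean)."""
    nearest = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy "EEG feature" clusters: positive valence near (1, 1), negative near (-1, -1).
X = [(1.0, 1.2), (0.9, 0.8), (1.1, 1.0), (-1.0, -0.9), (-0.8, -1.1), (-1.2, -1.0)]
y = ["positive"] * 3 + ["negative"] * 3
```

In practice the feature vectors would be derived from the recorded EEG (e.g., band powers per channel) and the labels from the questionnaire-based valence algorithm.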
In this paper, a novel optical illusion is described in which purple structures are perceived as purple at the point of fixation, while surrounding structures of the same purple color are perceived as shifted toward a blue hue. As the viewing distance increases, a greater number of the purple structures revert to a purple appearance.
Neural systems face the challenge of maintaining reliable representations amid variations from plasticity and spontaneous activity. In particular, the spontaneous dynamics of neuronal circuits are known to operate near a highly variable critical state, which intuitively contrasts with the requirement of reliable representation. It is intriguing to understand how reliable representation could be maintained, or even enhanced, by critical spontaneous states. We first examined the spontaneous activity of mouse visual cortex, finding that scale-free avalanches co-exist with a restricted representational geometry that preserves representational reliability amid drift of the responses to visual stimuli. To explore how the critical spontaneous state influences neural representation, we built an excitation-inhibition network with homeostatic plasticity, which self-organizes to the critical spontaneous state. This model successfully reproduced both the representational drift and the restricted representational geometry observed experimentally, in contrast with randomly shuffled plasticity, which causes accumulating drift of the representational geometry. We further showed that the self-organized critical state enhances the cross-session low-dimensional representation, compared to the non-critical state, by restricting the synaptic weights to a low-variation space. Our findings suggest that spontaneous self-organized criticality serves not only as a ubiquitous property of neural systems but also as a functional mechanism for maintaining reliable information representation in continuously changing networks, offering a potential explanation of how the brain maintains consistent perception and behavior despite ongoing synaptic rewiring.
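Scale-free avalanches of the kind examined above are typically extracted from discretized population activity as maximal runs of supra-threshold time bins; a minimal sketch (the thresholding and binning scheme is assumed, not taken from the paper):

```python
def avalanche_sizes(activity, threshold=0):
    """Split a binned activity trace into avalanches.

    An avalanche is a maximal run of bins with activity above threshold;
    its size is the total activity accumulated over the run.
    """
    sizes, current = [], 0
    for a in activity:
        if a > threshold:
            current += a          # still inside an avalanche
        elif current:
            sizes.append(current) # avalanche just ended
            current = 0
    if current:                   # trace ended mid-avalanche
        sizes.append(current)
    return sizes
```

The resulting size distribution is then fitted against a power law (e.g., with a maximum-likelihood exponent and a goodness-of-fit test) to assess whether the circuit operates near criticality.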
Background: Outdoor navigation poses significant challenges for people with blindness or low vision, yet the role of gaze behavior in supporting mobility remains underexplored. Fully sighted individuals typically adopt consistent scanning strategies, whereas those with visual impairments rely on heterogeneous adaptations shaped by residual vision and experience. Methods: We conducted a comparative eye-tracking study of fully sighted, low vision, blind, and fully blind participants navigating outdoor routes. Using a wearable eye tracker, we quantified fixation counts, fixation rate, fixation area, direction, peak fixation location, and walking speed. Results: Walking speed declined systematically with worsening vision. Fixation count increased with greater impairment, reflecting slower travel times and more frequent sampling. Fixation rate rose with worsening vision, though differences between most groups were not significant. Fixation spatial coverage decreased along the continuum of vision loss. Fixation patterns were most consistent in the fully sighted group. Peak fixation locations were centered in fully sighted participants but shifted outward and became more variable with impairment. Conclusion: Gaze strategies during navigation form a graded continuum across vision groups, with fully sighted and fully blind participants at opposite poles and low vision and blind groups spanning the middle. Visual acuity alone does not predict functional gaze use, as rehabilitation experience and adaptive strategies strongly shape behavior. These findings highlight the need for personalized rehabilitation and assistive technologies, with residual gaze patterns offering insight into mobility capacity and training opportunities for safer navigation.
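The fixation metrics quantified in this study can be illustrated with a minimal summary over fixation centers; the bounding-box coverage below is a stand-in, as the paper's exact fixation-area definition is not given in the abstract:

```python
def fixation_summary(fixations, duration_s):
    """Basic gaze metrics from a list of (x, y) fixation centers.

    Spatial coverage is approximated here by a bounding-box area; the
    study's actual fixation-area measure may differ (e.g., convex hull).
    """
    n = len(fixations)
    if n > 1:
        xs, ys = zip(*fixations)
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    else:
        area = 0.0
    return {
        "count": n,                   # total fixations on the route
        "rate_hz": n / duration_s,    # fixations per second of travel
        "bbox_area": area,            # spatial coverage of gaze
    }
```

Comparing such summaries across vision groups is what yields the graded continuum the study reports: counts rise and coverage shrinks as impairment worsens.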
Several definitions of phase have been proposed for stochastic oscillators, among which the mean-return-time phase and the stochastic asymptotic phase have drawn particular attention. Quantitative comparisons between these two definitions have been made in previous studies, but a physical interpretation of their relation is still missing. In this work, we illustrate this relation using the geometric phase, an essential concept in both classical and quantum mechanics. We use properties of probability currents and the generalized Doob's h-transform to explain how the geometric phase arises in stochastic oscillators. The analogy is also reminiscent of the noise-induced phase shift in oscillatory systems under deterministic perturbation, allowing us to compare the phase responses of deterministic and stochastic oscillators. The resulting framework unifies these distinct phase definitions and reveals that their difference is governed by a geometric drift term analogous to curvature. This interpretation bridges spectral theory, stochastic dynamics, and geometric phase, and provides new insight into how noise reshapes oscillatory behavior. Our results suggest broader applications of geometric-phase concepts to coupled stochastic oscillators and neural models.
Consciousness spans macroscopic experience and microscopic neuronal activity, yet linking these scales remains challenging. Prevailing theories, such as Integrated Information Theory, focus on a single scale, overlooking how causal power and its dynamics unfold across scales. Progress is constrained by scarce cross-scale data and difficulties in quantifying multiscale causality and dynamics. Here, we present a machine learning framework that infers multiscale causal variables and their dynamics from near-cellular-resolution calcium imaging in the mouse dorsal cortex. At lower levels, variables primarily aggregate input-driven information, whereas at higher levels they realize causality through metastable or saddle-point dynamics during wakefulness, collapsing into localized, stochastic dynamics under anesthesia. A one-dimensional top-level conscious variable captures the majority of causal power, yet variables across other scales also contribute substantially, giving rise to high emergent complexity in the conscious state. Together, these findings provide a multiscale causal framework that links neural activity to conscious states.
Effective analysis in neuroscience benefits significantly from robust conceptual frameworks. Traditional metrics of interbrain synchrony in social neuroscience typically depend on fixed, correlation-based approaches, restricting their explanatory capacity to descriptive observations. Inspired by the successful integration of geometric insights in network science, we propose leveraging discrete geometry to examine the dynamic reconfigurations in neural interactions during social exchanges. Unlike conventional synchrony approaches, our method interprets inter-brain connectivity changes through the evolving geometric structures of neural networks. This geometric framework is realized through a pipeline that identifies critical transitions in network connectivity using entropy metrics derived from curvature distributions. By doing so, we significantly enhance the capacity of hyperscanning methodologies to uncover underlying neural mechanisms in interactive social behavior.
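The entropy-over-curvature idea in the pipeline above can be sketched with the simple combinatorial Forman-Ricci curvature (the authors may use a different discrete curvature, such as Ollivier-Ricci; this is an illustrative stand-in): for an unweighted edge (u, v), F = 4 - deg(u) - deg(v), and the network-level summary is the Shannon entropy of the edge-curvature distribution.

```python
import math
from collections import Counter

def forman_curvature(edges):
    """Forman-Ricci curvature of each edge in an unweighted graph:
    F(u, v) = 4 - deg(u) - deg(v)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [4 - deg[u] - deg[v] for u, v in edges]

def curvature_entropy(edges):
    """Shannon entropy (bits) of the empirical edge-curvature distribution.

    Low entropy = geometrically homogeneous network; jumps in this value
    across time windows would flag the critical transitions the pipeline
    is designed to detect.
    """
    curv = forman_curvature(edges)
    counts = Counter(curv)
    n = len(curv)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Tracking this entropy over sliding windows of inter-brain connectivity graphs is one concrete way to realize the curvature-based transition detection the abstract describes.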
Neural mass models describe the mean-field dynamics of populations of neurons. In this work we illustrate how fundamental ideas of physics, such as energy and conserved quantities, can be explored for such models. We show that time-rescaling renders recent next-generation neural mass models Hamiltonian in the limit of a homogeneous population or strong coupling. The corresponding energy-like quantity provides considerable insight into the model dynamics even in the case of heterogeneity, explaining, for example, why orbits are near-ellipsoidal and predicting spike amplitude during bursting dynamics. We illustrate how these energy considerations provide a possible link between neuronal population behavior and energy landscape theory, which has been used to analyze data from brain recordings. Our introduction of near-Hamiltonian descriptions of neuronal activity could permit the application of highly developed physics theory to gain insight into brain behavior.
Electroencephalographic neurofeedback (EEG-NF) has been proposed as a promising technique to modulate brain activity through real-time EEG-based feedback. Alpha neurofeedback in particular is believed to induce rapid self-regulation of brain rhythms, with applications in cognitive enhancement and clinical treatment. However, whether this modulation reflects specific volitional control or non-specific influences remains unresolved. In a preregistered, double-blind, sham-controlled, single-session study, we evaluated alpha upregulation in healthy participants receiving either genuine or sham EEG-NF. A third arm, a passive control group, was included to distinguish non-specific influences that are related to active engagement in EEG-NF from those that are not. Throughout the session, alpha power increased robustly, yet independently of feedback veracity, engagement in self-regulation, or feedback update frequency. Parallel increases in theta and sensorimotor rhythms further suggest broadband non-specific modulation. Importantly, these results challenge the foundational assumption of EEG-NF: that feedback enables volitional EEG control. Instead, they point to spontaneous repetition-related processes as primary drivers, calling for a critical reassessment of neurofeedback efficacy and its underlying mechanisms.
The cerebellum is implicated in nearly every domain of human cognition, yet our understanding of how this subcortical structure contributes to cognition remains elusive. Efforts on this front have tended to fall into one of two camps. On one side are those who seek to identify a universal cerebellar transform, a single algorithm that can be applied across domains as diverse as sensorimotor learning, social cognition, and decision making. On the other side are those who focus on functional specializations tailored for different task domains. In this perspective, we propose an integrated approach, one that recognizes functional specialization across different cerebellar subregions, but also builds on common constraints that help define the conditions that engage the cerebellum. Drawing on recurring principles from the cerebellum's well-established role in motor control, we identify three core constraints: Prediction - the cerebellum performs anticipatory, not reactive, computations; Timescale - the cerebellum generates predictions limited to short intervals; and Continuity - the cerebellum transforms continuous representations such as space and time. Together, these constraints define the boundary conditions underlying when and how the cerebellum supports cognition, and, just as importantly, specify the types of computations that should not depend on the cerebellum.
This study introduces a novel, flexible, and implantable neural probe fabricated with a cost-effective microfabrication process based on a thin polyimide film. Polyimide film, commercially known as Kapton, serves as a flexible substrate for the probe's microelectrodes, conductive tracks, and contact pads, which are made from a thin film of gold (Au). SU-8 covers the tracks for electrical isolation and increases the stiffness of the probe for easier implantation. To evaluate the performance of the fabricated probe, electrochemical impedance spectroscopy (EIS) and artificial neural signal recording were used to characterize its properties. The microelectrode dimensions were carefully chosen to provide the low impedance necessary for acquiring local field potential (LFP) signals. In vivo LFP data were obtained from a male zebra finch presented with auditory stimuli. After appropriate filtering of the extracellular recordings and analysis of the data, the results were validated by comparison with signals acquired with a commercial neural electrode. Because Kapton, SU-8, and Au are non-toxic and well tolerated in the body environment, the fabricated probe is a promising biocompatible implantable neural probe that may pave the way for other implantable neural devices with commercial potential.
We introduce LITcoder, an open-source library for building and benchmarking neural encoding models. Designed as a flexible backend, LITcoder provides standardized tools for aligning continuous stimuli (e.g., text and speech) with brain data, transforming stimuli into representational features, mapping those features onto brain data, and evaluating the predictive performance of the resulting model on held-out data. The library implements a modular pipeline covering a wide array of methodological design choices, so researchers can easily compose, compare, and extend encoding models without reinventing core infrastructure. Such choices include brain datasets, brain regions, stimulus feature (both neural-net-based and control, such as word rate), downsampling approaches, and many others. In addition, the library provides built-in logging, plotting, and seamless integration with experiment tracking platforms such as Weights & Biases (W&B). We demonstrate the scalability and versatility of our framework by fitting a range of encoding models to three story listening datasets: LeBel et al. (2023), Narratives, and Little Prince. We also explore the methodological choices critical for building encoding models for continuous fMRI data, illustrating the importance of accounting for all tokens in a TR scan (as opposed to just taking the last one, even when contextualized), incorporating hemodynamic lag effects, using train-test splits that minimize information leakage, and accounting for head motion effects on encoding model predictivity. Overall, LITcoder lowers technical barriers to encoding model implementation, facilitates systematic comparisons across models and datasets, fosters methodological rigor, and accelerates the development of high-quality high-performance predictive models of brain activity. Project page: https://litcoder-brain.github.io
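The core of an encoding model of the kind LITcoder builds is a regularized linear map from stimulus features to brain responses. Below is a minimal numpy sketch of closed-form ridge regression on synthetic data; it omits the hemodynamic-lag modeling, TR downsampling, and leakage-safe splitting the abstract highlights, and the dimensions are made up:

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y.

    X: (n_TRs, n_features) stimulus features; Y: (n_TRs, n_voxels) responses.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Synthetic stand-in for feature-to-voxel mapping.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))          # e.g., language-model features per TR
W_true = rng.standard_normal((5, 3))       # ground-truth feature -> voxel weights
Y = X @ W_true + 0.01 * rng.standard_normal((200, 3))
W = fit_ridge(X, Y, alpha=0.1)
```

In a real pipeline the fitted weights are evaluated by predicting held-out runs and correlating predictions with measured voxel time series, which is the predictivity metric the library reports.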
Attention Deficit Hyperactivity Disorder (ADHD) is a common brain disorder in children that can persist into adulthood, affecting social, academic, and career life. Early diagnosis is crucial for managing these impacts on patients and the healthcare system but is often labor-intensive and time-consuming. This paper presents a novel method to improve ADHD diagnosis precision and timeliness by leveraging Deep Learning (DL) approaches and electroencephalogram (EEG) signals. We introduce ADHDeepNet, a DL model that utilizes comprehensive temporal-spatial characterization, attention modules, and explainability techniques optimized for EEG signals. ADHDeepNet integrates feature extraction and refinement processes to enhance ADHD diagnosis. The model was trained and validated on a dataset of 121 participants (61 ADHD, 60 Healthy Controls), employing nested cross-validation for robust performance. The proposed two-stage methodology uses a 10-fold cross-subject validation strategy. Initially, each iteration optimizes the model's hyper-parameters with inner 2-fold cross-validation. Then, Additive Gaussian Noise (AGN) with various standard deviations and magnification levels is applied for data augmentation. ADHDeepNet achieved 100% sensitivity and 99.17% accuracy in classifying ADHD/HC subjects. To clarify model explainability and identify key brain regions and frequency bands for ADHD diagnosis, we analyzed the learned weights and activation patterns of the model's primary layers. Additionally, t-distributed Stochastic Neighbor Embedding (t-SNE) visualized high-dimensional data, aiding in interpreting the model's decisions. This study highlights the potential of DL and EEG in enhancing ADHD diagnosis accuracy and efficiency.
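The additive-Gaussian-noise (AGN) augmentation step described above can be sketched as follows; the paper's exact standard deviations and magnification levels are not given in the abstract, so the fractions below are placeholders:

```python
import numpy as np

def augment_with_gaussian_noise(epochs, std_fracs=(0.05, 0.1), seed=0):
    """Create noisy copies of EEG epochs for data augmentation.

    epochs: array of shape (n_epochs, n_channels, n_samples).
    Noise std is a fraction of each channel's own std within the epoch
    (the paper's exact magnification scheme is not specified here).
    """
    rng = np.random.default_rng(seed)
    out = [epochs]
    for f in std_fracs:
        noise = rng.standard_normal(epochs.shape) * (
            f * epochs.std(axis=-1, keepdims=True)
        )
        out.append(epochs + noise)
    return np.concatenate(out, axis=0)
```

Each noise level multiplies the training set size, which helps the inner cross-validation folds see more variability without touching the held-out subjects.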
Accurate forecasting of individualized, high-resolution cortical thickness (CTh) trajectories is essential for detecting subtle cortical changes, providing invaluable insights into neurodegenerative processes and facilitating earlier and more precise intervention strategies. However, CTh forecasting is a challenging task due to the intricate non-Euclidean geometry of the cerebral cortex and the need to integrate multi-modal data for subject-specific predictions. To address these challenges, we introduce the Spherical Brownian Bridge Diffusion Model (SBDM). Specifically, we propose a bidirectional conditional Brownian bridge diffusion process to forecast CTh trajectories at the vertex level of registered cortical surfaces. Our technical contribution includes a new denoising model, the conditional spherical U-Net (CoS-UNet), which combines spherical convolutions and dense cross-attention to integrate cortical surfaces and tabular conditions seamlessly. Compared to previous approaches, SBDM achieves significantly reduced prediction errors, as demonstrated by our experiments based on longitudinal datasets from the ADNI and OASIS. Additionally, we demonstrate SBDM's ability to generate individual factual and counterfactual CTh trajectories, offering a novel framework for exploring hypothetical scenarios of cortical development.
Artifacts in electroencephalography (EEG) -- muscle, eye movement, electrode, chewing, and shiver -- confound automated analysis yet are costly to label at scale. We study whether modern generative models can synthesize realistic, label-aware artifact segments suitable for augmentation and stress-testing. Using the TUH EEG Artifact (TUAR) corpus, we curate subject-wise splits and fixed-length multi-channel windows (e.g., 250 samples) with preprocessing tailored to each model (per-window min-max for adversarial training; per-recording/channel $z$-score for diffusion). We compare a conditional WGAN-GP with a projection discriminator to a 1D denoising diffusion model with classifier-free guidance, and evaluate along three axes: (i) fidelity via Welch band-power deltas ($\Delta\delta,\ \Delta\theta,\ \Delta\alpha,\ \Delta\beta$), channel-covariance Frobenius distance, autocorrelation $L_2$, and distributional metrics (MMD/PRD); (ii) specificity via class-conditional recovery with lightweight $k$NN/classifiers; and (iii) utility via augmentation effects on artifact recognition. In our setting, WGAN-GP achieves closer spectral alignment and lower MMD to real data, while both models exhibit weak class-conditional recovery, limiting immediate augmentation gains and revealing opportunities for stronger conditioning and coverage. We release a reproducible pipeline -- data manifests, training configurations, and evaluation scripts -- to establish a baseline for EEG artifact synthesis and to surface actionable failure modes for future work.
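The band-power deltas used above as a fidelity metric compare average spectral power per canonical band between real and synthetic windows. A simplified periodogram-based sketch (the paper uses Welch's method proper, which averages over overlapping segments; band edges are the conventional ones):

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, fs):
    """Average power in each canonical EEG band from a simple periodogram."""
    n = x.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    psd = np.abs(np.fft.rfft(x, axis=-1)) ** 2 / (fs * n)
    return {b: psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
            for b, (lo, hi) in BANDS.items()}

def band_power_deltas(real, fake, fs):
    """Per-band absolute difference in mean power, real vs. synthetic."""
    pr, pf = band_powers(real, fs), band_powers(fake, fs)
    return {b: float(np.abs(np.mean(pr[b]) - np.mean(pf[b]))) for b in BANDS}
```

Small deltas across all four bands indicate that the generator reproduces the spectral profile of real artifacts, which is the axis on which the WGAN-GP outperformed the diffusion model in this study.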
Multiscale modelling offers a multifaceted perspective on the mechanisms of the brain and on how neurodegenerative disorders like Parkinson's disease (PD) manifest and evolve over time. In this study, we propose a novel co-simulation multiscale approach that unifies the micro- and macroscales to more rigorously capture brain dynamics. The presented design considers the electrodiffusive activity across the brain and within the cortex-basal ganglia-thalamus network implicated in the mechanisms of PD, as well as the contribution of presynaptic inputs to the highlighted regions. The application of DBS and its effects, along with the inclusion of stochastic noise, are also examined. We found that the thalamus exhibits large, fluctuating spiking in both the deterministic and stochastic conditions, suggesting that noise contributes primarily to neural variability rather than driving the overall spiking activity. Ultimately, this work intends to provide greater insight into the dynamics of PD and the brain, which may eventually be translated into clinical use.