Reservoir computers (RCs) provide a computationally efficient alternative to deep learning while also offering a framework for incorporating brain-inspired computational principles. By using an internal neural network with random, fixed connections (the 'reservoir') and training only the output weights, RCs simplify the training process but remain sensitive to the choice of hyperparameters that govern activation functions and network architecture. Moreover, typical RC implementations overlook a critical aspect of neuronal dynamics: the balance between excitatory and inhibitory (E/I) signals, which is essential for robust brain function. We show that RCs characteristically perform best in balanced or slightly over-inhibited regimes, outperforming excitation-dominated ones. To reduce the need for precise hyperparameter tuning, we introduce a self-adapting mechanism that locally adjusts E/I balance to achieve target neuronal firing rates, improving performance by up to 130% in tasks like memory capacity and time series prediction compared with globally tuned RCs. Incorporating brain-inspired heterogeneity in target neuronal firing rates further reduces the need for fine-tuning hyperparameters and enables RCs to excel across linear and non-linear tasks. These results support a shift from static optimization to dynamic adaptation in reservoir design, demonstrating how brain-inspired mechanisms improve RC performance and robustness while deepening our understanding of neural computation.
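For concreteness, here is a minimal echo-state-network sketch in numpy illustrating the fixed-reservoir, trained-readout idea the abstract describes. The sizes, spectral radius, and ridge strength below are exactly the kind of hand-tuned hyperparameters the paper's self-adapting E/I mechanism aims to make less critical; that mechanism itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (not the paper's values).
N, T_train, T_test = 200, 1000, 200
spectral_radius, ridge = 0.9, 1e-6

# Fixed random reservoir: only the readout W_out is ever trained.
W = rng.normal(size=(N, N))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

def run_reservoir(u):
    """Drive the reservoir with a 1-D input sequence and collect states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t)
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a noisy sine wave.
u = np.sin(0.2 * np.arange(T_train + T_test + 1))
u += 0.05 * rng.normal(size=u.size)
X, y = run_reservoir(u[:-1]), u[1:]

# Ridge-regression readout fitted on the training segment only.
A = X[:T_train].T @ X[:T_train] + ridge * np.eye(N)
W_out = np.linalg.solve(A, X[:T_train].T @ y[:T_train])
print("test MSE:", np.mean((X[T_train:] @ W_out - y[T_train:]) ** 2))
```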
Animals' internal states reflect variables like their position in space, orientation, decisions, and motor actions. But how should these internal states be arranged? Internal states which frequently transition between one another should be close enough that transitions can happen quickly, but not so close that neural noise significantly impacts the stability of those states, and how reliably they can be encoded and decoded. In this paper, we study the problem of striking a balance between these two concerns, which we call an 'optimal packing' problem since it resembles mathematical problems like sphere packing. While this problem is generally extremely difficult, we show that symmetries in environmental transition statistics imply certain symmetries of the optimal neural representations, which allows us in some cases to exactly solve for the optimal state arrangement. We focus on two toy cases: uniform transition statistics, and cyclic transition statistics. Code is available at https://github.com/john-vastola/optimal-packing-neurreps23.
Information processing in the brain is coordinated by the dynamic activity of neurons and neural populations at a range of spatiotemporal scales. These dynamics, captured in the form of electrophysiological recordings and neuroimaging, show evidence of time-irreversibility and broken detailed balance, suggesting that the brain operates in a nonequilibrium stationary state. Furthermore, the level of nonequilibrium, measured by entropy production or irreversibility, appears to be a crucial signature of cognitive complexity and consciousness. The subsequent study of neural dynamics from the perspective of nonequilibrium statistical physics is an emergent field that challenges the assumptions of symmetry and maximum-entropy that are common in traditional models. In this review, we discuss the plethora of exciting results emerging at the interface of nonequilibrium dynamics and neuroscience. We begin with an introduction to the mathematical paradigms necessary to understand nonequilibrium dynamics in both continuous and discrete state-spaces. Next, we review both model-free and model-based approaches to analysing nonequilibrium dynamics in both continuous-state recordings and neural spike-trains, as well as the results of such analyses. We briefly consider the topic of nonequilibrium computation in neural systems, before concluding with a discussion and outlook on the field.
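As a toy illustration of broken detailed balance in a discrete state-space, the sketch below computes the entropy production rate of a stationary Markov chain; the transition matrix is made up, and any chain with a net cyclic flux would serve equally well.

```python
import numpy as np

# Hypothetical three-state chain with a net cyclic flux; any transition
# matrix violating detailed balance would do (rows sum to 1).
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Entropy production rate of the stationary chain,
#   sigma = sum_ij pi_i P_ij log( pi_i P_ij / (pi_j P_ji) ),
# which is non-negative and zero iff detailed balance holds.
J_fwd = pi[:, None] * P        # J_fwd[i, j] = pi_i P_ij
J_rev = J_fwd.T                # J_rev[i, j] = pi_j P_ji
sigma = np.sum(J_fwd * np.log(J_fwd / J_rev))
print(f"entropy production: {sigma:.3f} nats per step")
```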
To the best of our knowledge, all existing methods that can generate synthetic brain magnetic resonance imaging (MRI) scans for a specific individual require detailed structural or volumetric information about the individual's brain. However, such brain information is often scarce, expensive, and difficult to obtain. In this paper, we propose the first approach capable of generating synthetic brain MRI segmentations, specifically 3D white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) segmentations, for individuals using their easily obtainable and often readily available demographic, interview, and cognitive test information. Our approach features a novel deep generative model, CSegSynth, which outperforms existing prominent generative models, including the conditional variational autoencoder (C-VAE), conditional generative adversarial network (C-GAN), and conditional latent diffusion model (C-LDM). We demonstrate the high quality of our synthetic segmentations through extensive evaluations. Furthermore, in assessing the effectiveness of the individual-specific generation, we achieve superior volume prediction, with Pearson correlation coefficients reaching 0.80, 0.82, and 0.70 between the ground-truth WM, GM, and CSF volumes of test individuals and those volumes predicted based on generated individual-specific segmentations, respectively.
Early dementia diagnosis requires biomarkers sensitive to both structural and functional brain changes. While structural neuroimaging biomarkers have progressed significantly, objective functional biomarkers of early cognitive decline remain a critical unmet need. Current cognitive assessments often rely on behavioral responses, making them susceptible to factors like effort, practice effects, and educational background, thereby hindering early and accurate detection. This work introduces a novel approach, leveraging a lightweight convolutional neural network (CNN) to infer cognitive impairment levels directly from electroencephalography (EEG) data. Critically, this method employs a passive fast periodic visual stimulation (FPVS) paradigm, eliminating the need for explicit behavioral responses or task comprehension from the participant. This passive approach provides an objective measure of working memory function, independent of confounding factors inherent in active cognitive tasks, and offers a promising new avenue for early and unbiased detection of cognitive decline.
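For concreteness, here is a minimal PyTorch sketch of a lightweight CNN that maps multi-channel EEG epochs to impairment-level logits. The layer shapes, channel count, and three-class output are illustrative assumptions, not the architecture or labels used in this work.

```python
import torch
import torch.nn as nn

class LightweightEEGCNN(nn.Module):
    """Illustrative small CNN: temporal conv -> spatial conv -> pooled
    features -> impairment-level logits. All shapes are assumptions."""
    def __init__(self, n_channels=32, n_samples=512, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 33), padding=(0, 16)),  # temporal filters
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),          # spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.25),
        )
        self.classifier = nn.Linear(16 * (n_samples // 8), n_classes)

    def forward(self, x):                # x: (batch, 1, channels, samples)
        z = self.features(x).flatten(1)
        return self.classifier(z)

model = LightweightEEGCNN()
epochs = torch.randn(4, 1, 32, 512)     # four fake EEG epochs
print(model(epochs).shape)              # -> torch.Size([4, 3])
```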
Music is a universal feature of human culture, linked to embodied cognitive functions that drive learning, action, and the emergence of creativity and individuality. Evidence highlights the critical role of statistical learning, an implicit cognitive process of the brain, in musical creativity and individuality. Despite its significance, the precise neural and computational mechanisms underpinning these dynamic and embodied cognitive processes remain poorly understood. This paper discusses how individuality and creativity emerge within the framework of the brain's statistical learning, drawing on a series of neural and computational studies. This work offers perspectives on the mechanisms driving the heterogeneous nature of statistical learning abilities and embodied mechanisms and provides a framework to explain the paradoxical phenomenon where individuals with specific cognitive traits that limit certain perceptual abilities excel in creative domains.
Simultaneous EEG-fMRI recordings are increasingly used to investigate brain activity by leveraging the complementary high spatial and high temporal resolution of fMRI and EEG signals, respectively. It remains unclear, however, to what degree these two imaging modalities capture shared information about neural activity. Here, we investigate whether it is possible to predict both task-evoked and spontaneous fMRI signals of motor brain networks from EEG time-varying spectral power using interpretable models trained for individual subjects with Sparse Group Lasso regularization. Critically, we test the trained models on data acquired from each subject on a different day and obtain statistical validation by comparison with appropriate null models as well as the conventional EEG sensorimotor rhythm. We find significant prediction results in most subjects, although less frequently for resting-state compared to task-based conditions. Furthermore, we interpret the learned model parameters to understand representations of EEG-fMRI coupling in terms of predictive EEG channels, frequencies, and haemodynamic delays. In conclusion, our work provides evidence of the ability to predict fMRI motor brain activity from EEG recordings alone across different days, in both task-evoked and spontaneous conditions, with statistical significance in individual subjects. These results present great potential for translation to EEG neurofeedback applications.
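The Sparse Group Lasso objective can be sketched with a simple proximal-gradient solver, as below. The feature grouping (frequency bands nested within channels), dimensions, and regularization strengths are assumptions for illustration, not the paper's actual design matrix or solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features: channels x frequency bands, grouped by channel.
n_samples, n_channels, n_bands = 300, 8, 5
n_feat = n_channels * n_bands
groups = [np.arange(c * n_bands, (c + 1) * n_bands) for c in range(n_channels)]

X = rng.normal(size=(n_samples, n_feat))
beta_true = np.zeros(n_feat)
beta_true[groups[0]] = rng.normal(size=n_bands)   # only channel 0 is predictive
y = X @ beta_true + 0.1 * rng.normal(size=n_samples)

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sgl_prox(z, step, lam1, lam2):
    """Prox of the sparse-group-lasso penalty: elementwise soft-threshold
    (lasso part), then group-wise shrinkage (group-lasso part)."""
    z = soft(z, step * lam1)
    for g in groups:
        norm = np.linalg.norm(z[g])
        z[g] = 0.0 if norm == 0 else max(0.0, 1 - step * lam2 / norm) * z[g]
    return z

# Proximal gradient descent on 0.5/n ||y - X b||^2 + penalties.
lam1, lam2 = 0.05, 0.05
step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n_samples)
beta = np.zeros(n_feat)
for _ in range(500):
    grad = -X.T @ (y - X @ beta) / n_samples
    beta = sgl_prox(beta - step * grad, step, lam1, lam2)

active = {c for c in range(n_channels) if np.linalg.norm(beta[groups[c]]) > 1e-8}
print("channels selected:", sorted(active))
```

The two penalties act at different scales: the group term zeroes out entire channels, while the elementwise term keeps the surviving channels sparse across frequencies, which is what makes the learned parameters interpretable per channel and band.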
Stick insect stepping patterns have been studied for insights about locomotor rhythm generation and control, because the underlying neural system is relatively accessible experimentally and produces a variety of rhythmic outputs. Harnessing the experimental identification of effective interactions among neuronal units involved in stick insect stepping pattern generation, previous studies proposed computational models simulating aspects of stick insect locomotor activity. While these models generate diverse stepping patterns and transitions between them, there has not been an in-depth analysis of the mechanisms underlying their dynamics. In this study, we focus on modeling rhythm generation by the neurons associated with the protraction-retraction, levitation-depression, and extension-flexion antagonistic muscle pairs of the mesothoracic (middle) leg of stick insects. Our model features a reduced central pattern generator (CPG) circuit for each joint and includes synaptic interactions among the CPGs; we also consider extensions such as the inclusion of motoneuron pools controlled by the CPG components. The resulting network is described by an 18-dimensional system of ordinary differential equations. We use fast-slow decomposition, projection into interacting phase planes, and a heavy reliance on input-dependent nullclines to analyze this model. Specifically, we identify and elucidate dynamic mechanisms capable of generating a stepping rhythm, with a sequence of biologically constrained phase relationships, in a three-joint stick insect limb model. Furthermore, we explain the robustness to parameter changes and tunability of these patterns. In particular, the model allows us to identify possible mechanisms by which neuromodulatory and top-down effects could tune stepping pattern output frequency.
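For intuition about a single joint's rhythm generator, here is a minimal half-center oscillator (a Matsuoka two-unit network with mutual inhibition and adaptation), a drastically reduced stand-in for one CPG within the paper's 18-dimensional model; all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Matsuoka half-center oscillator: two mutually inhibiting units with
# adaptation. Illustrative parameters, not the paper's model.
tau, tau_a, b, w, drive = 0.1, 0.5, 2.5, 2.0, 1.0

def rhs(t, z):
    x1, v1, x2, v2 = z
    y1, y2 = max(x1, 0.0), max(x2, 0.0)       # rectified outputs
    dx1 = (-x1 - b * v1 - w * y2 + drive) / tau
    dv1 = (-v1 + y1) / tau_a                   # slow adaptation
    dx2 = (-x2 - b * v2 - w * y1 + drive) / tau
    dv2 = (-v2 + y2) / tau_a
    return [dx1, dv1, dx2, dv2]

sol = solve_ivp(rhs, (0, 5), [0.1, 0.0, 0.0, 0.0], max_step=1e-3)
y1 = np.maximum(sol.y[0], 0)   # e.g. levator-like output
y2 = np.maximum(sol.y[2], 0)   # e.g. depressor-like output
steady = sol.t > 2.5           # discard the transient
print("output correlation (negative = antiphase):",
      np.corrcoef(y1[steady], y2[steady])[0, 1])
```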
It was previously found (arXiv:2306.07676) that the selectivity gain due to fluctuations in the process of primary odor reception by an olfactory receptor neuron (ORN) is most pronounced at an optimal odor concentration. Here, we use numerical simulation to estimate what the gain value could be at that concentration, modeling the ORN as a leaky integrate-and-fire neuron whose membrane is populated by receptor proteins R that bind and release odor molecules randomly. Each R is modeled as a ligand-gated ion channel, and binding and release are modeled as a Markov stochastic process. Possible values for the selectivity gain are calculated for ORN parameters suggested by experimental data. Keywords: ORN, selectivity, receptor proteins, fluctuations, stochastic process, Markov process
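A stripped-down version of this simulation setup might look as follows: independent two-state Markov receptors gate input into a leaky integrate-and-fire membrane. All rates and scales below are placeholders, not the experimentally derived ORN parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder parameters (not fitted to ORN data).
N = 100                 # receptor proteins
k_on_c = 50.0           # binding rate x odor concentration [1/s]
k_off = 100.0           # unbinding rate [1/s]
dt = 1e-4               # time step [s]
tau_m, v_th, v_reset = 0.01, 1.0, 0.0
g_per_channel = 5.0     # drive per bound receptor (arbitrary units)

bound = np.zeros(N, dtype=bool)
v, spikes = 0.0, 0
for _ in range(int(1.0 / dt)):          # simulate 1 s
    # Markov binding-release: each receptor flips state independently.
    p_flip = np.where(bound, k_off * dt, k_on_c * dt)
    bound ^= rng.random(N) < p_flip
    # LIF membrane driven by the fluctuating count of bound receptors.
    v += dt * (-v / tau_m + g_per_channel * bound.sum())
    if v >= v_th:
        v = v_reset
        spikes += 1

print(f"firing rate: {spikes} Hz; mean bound fraction "
      f"{k_on_c / (k_on_c + k_off):.2f} (theory)")
```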
Many dynamical systems found in biology, ranging from genetic circuits to the human brain to human social systems, are inherently computational. Although extensive research has explored their resulting functions and behaviors, the underlying computations often remain elusive. Even the fundamental task of quantifying the amount of computation performed by a dynamical system remains under-investigated. In this study, we address this challenge by introducing a novel framework to estimate the amount of computation implemented by an arbitrary physical system based on empirical time-series of its dynamics. This framework works by forming a statistical reconstruction of that dynamics, and then defining the amount of computation in terms of both the complexity and fidelity of this reconstruction. We validate our framework by showing that it appropriately distinguishes the relative amount of computation across different regimes of Lorenz dynamics and various computation classes of cellular automata. We then apply this framework to neural activity in Caenorhabditis elegans, as captured by calcium imaging. By analyzing time-series neural data obtained from the fluorescent intensity of the calcium indicator GCaMP, we find that high and low amounts of computation are required, respectively, in the neural dynamics of freely moving and immobile worms. Our analysis further sheds light on the amount of computation performed when the system is in various locomotion states. In sum, our study refines the definition of computational amount from time-series data and highlights neural computation in a simple organism across distinct behavioral states.
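The "complexity plus fidelity of a reconstruction" recipe can be operationalized in many ways; as a toy illustration only (not the paper's estimator), the sketch below scores a time series by the best two-part description length over autoregressive reconstructions of increasing order, trading model complexity against residual fit.

```python
import numpy as np

def mdl_score(x, max_order=10):
    """Toy complexity-plus-fidelity score: fit AR(p) reconstructions by
    least squares and return the best two-part description length,
    i.e. parameter cost plus Gaussian residual coding cost."""
    n = len(x)
    best = np.inf
    for p in range(1, max_order + 1):
        X = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        mse = np.mean((y - X @ coef) ** 2)
        dl = 0.5 * p * np.log(n) + 0.5 * n * np.log(mse)
        best = min(best, dl)
    return best

rng = np.random.default_rng(0)
t = np.arange(2000)
structured = np.sin(0.1 * t) + 0.1 * rng.normal(size=t.size)  # predictable
noise = rng.normal(size=t.size)                                # unstructured
print(f"structured: {mdl_score(structured):.0f}  noise: {mdl_score(noise):.0f}")
```

A lower score means the dynamics are more compressible by a simple reconstruction; how such reconstruction scores map onto an amount of computation is precisely the framework the paper develops.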
The study of irreducible higher-order interactions has become a core topic of study in complex systems. Two of the most well-developed frameworks, topological data analysis and multivariate information theory, aim to provide formal tools for identifying higher-order interactions in empirical data. Despite similar aims, however, these two approaches are built on markedly different mathematical foundations and have been developed largely in parallel. In this study, we present a head-to-head comparison of topological data analysis and information-theoretic approaches to describing higher-order interactions in multivariate data, with the aim of assessing the similarities and differences between how the frameworks define "higher-order structures." We begin with toy examples with known topologies, before turning to naturalistic data: fMRI signals collected from the human brain. We find that intrinsic, higher-order synergistic information is associated with three-dimensional cavities in a point cloud: shapes such as spheres are synergy-dominated. In fMRI data, we find strong correlations between synergistic information and both the number and size of three-dimensional cavities. Furthermore, we find that dimensionality reduction techniques such as PCA preferentially represent higher-order redundancies, and largely fail to preserve both higher-order information and topological structure, suggesting that common manifold-based approaches to studying high-dimensional data are systematically failing to identify important features of the data. These results point towards the possibility of developing a rich theory of higher-order interactions that spans topological and information-theoretic approaches while simultaneously highlighting the profound limitations of more conventional methods.
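On the information-theoretic side, the synergy-versus-redundancy balance is often summarized by the O-information, which is negative for synergy-dominated systems and positive for redundancy-dominated ones. Here is a covariance-based sketch under a Gaussian assumption; the paper's exact estimators are not specified here.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian (nats)."""
    cov = np.atleast_2d(cov)
    n = cov.shape[0]
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(X):
    """O-information under a Gaussian assumption:
    Omega > 0 redundancy-dominated, Omega < 0 synergy-dominated."""
    cov = np.cov(X, rowvar=False)
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += gaussian_entropy(cov[i, i]) \
               - gaussian_entropy(cov[np.ix_(rest, rest)])
    return omega

rng = np.random.default_rng(0)
z = rng.normal(size=5000)
redundant = np.column_stack([z + 0.3 * rng.normal(size=5000) for _ in range(3)])
x, y = rng.normal(size=5000), rng.normal(size=5000)
synergistic = np.column_stack([x, y, x + y + 0.1 * rng.normal(size=5000)])
print(f"redundant triplet:   Omega = {o_information(redundant):+.2f}")
print(f"synergistic triplet: Omega = {o_information(synergistic):+.2f}")
```

Three noisy copies of one source are redundancy-dominated, while a variable plus the sum it helps form is the canonical Gaussian synergy example.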
In this study, we explore how the combination of synthetic biology, neuroscience modeling, and neuromorphic electronic systems offers a new approach to creating an artificial system that mimics the natural sense of smell. We argue that a co-design approach offers significant advantages in replicating the complex dynamics of odor sensing and processing. We investigate a hybrid system of synthetic sensory neurons that provides three key features: a) receptor-gated ion channels, b) an interface between synthetic biology and semiconductors, and c) event-based encoding and computing based on spiking networks. This research seeks to develop a platform for ultra-sensitive, specific, and energy-efficient odor detection, with potential implications for environmental monitoring, medical diagnostics, and security.
Understanding the sequence of cognitive operations that underlie decision-making is a fundamental challenge in cognitive neuroscience. Traditional approaches often rely on group-level statistics, which obscure trial-by-trial variations in cognitive strategies. In this study, we introduce a novel machine learning method that combines Hidden Multivariate Pattern analysis with a Structured State Space Sequence model to decode cognitive strategies from electroencephalography data at the trial level. We apply this method to a decision-making task, where participants were instructed to prioritize either speed or accuracy in their responses. Our results reveal an additional cognitive operation, labeled Confirmation, which seems to occur predominantly in the accuracy condition but also frequently in the speed condition. The modeled probability that this operation occurs is associated with a higher probability of responding correctly, as well as with changes of mind, as indexed by electromyography data. By successfully modeling cognitive operations at the trial level, we provide empirical evidence for dynamic variability in decision strategies, challenging the assumption of homogeneous cognitive processes within experimental conditions. Our approach shows the potential of sequence modeling in cognitive neuroscience to capture trial-level variability that is obscured by aggregate analyses. The introduced method offers a new way to detect and understand cognitive strategies in a data-driven manner, with implications for both theoretical research and practical applications in many fields.
Existing evidence suggests that neural responses to errors are exaggerated in individuals at risk of depression and anxiety. This phenomenon has raised the possibility that the error-related negativity (ERN), a well-known neural correlate of error monitoring, could be used as a diagnostic tool for several psychological disorders. However, conflicting evidence relating psychopathology to the ERN suggests that this phenomenon is modulated by variables that are yet to be identified. Socioeconomic status (SES) could potentially play a role in the relationship between the ERN and psychopathological disorders, given that SES is known to be associated with depression and anxiety. In the current study, we first tested whether SES was related to ERN amplitude. Second, we examined whether the relationship between the ERN and depression was explained by differences in SES. We measured the ERN in a sample of adult participants from low to high socioeconomic backgrounds while controlling for their depression scores. Results show that SES correlated with variations in ERN amplitude. Specifically, we found that low-SES individuals had a larger ERN than wealthier individuals. In addition, the relationship between depression and the ERN was fully accounted for by variations in SES. Overall, our results indicate that SES predicts neural responses to errors. Findings also indicate that the link between depression and the ERN may be the result of SES variations. Future research examining the links between psychopathology and error monitoring should control for SES differences, and caution is needed if error-monitoring measures are to be used as a diagnostic tool in low-income communities.
This review synthesizes advances in predictive processing within the sensory cortex. Predictive processing theorizes that the brain continuously predicts sensory inputs, refining neuronal responses by highlighting prediction errors. We identify key computational primitives, such as stimulus adaptation, dendritic computation, excitatory/inhibitory balance and hierarchical processing, as central to this framework. Our review highlights convergences, such as top-down inputs and inhibitory interneurons shaping mismatch signals, and divergences, including species-specific hierarchies and modality-dependent layer roles. To address these conflicts, we propose experiments in mice and primates using in-vivo two-photon imaging and electrophysiological recordings to test whether temporal, motor, and omission mismatch stimuli engage shared or distinct mechanisms. The resulting dataset, collected and shared via the OpenScope program, will enable model validation and community analysis, fostering iterative refinement and refutability to decode the neural circuits of predictive processing.
Flexible modulation of temporal dynamics in neural sequences underlies many cognitive processes. For instance, we can adaptively change the speed of motor sequences and speech. While such flexibility is influenced by various factors such as attention and context, the common neural mechanisms responsible for this modulation remain poorly understood. We developed a biologically plausible neural network model that incorporates neurons with multiple timescales and Hebbian learning rules. This model is capable of generating simple sequential patterns as well as performing delayed match-to-sample (DMS) tasks that require the retention of stimulus identity. Fast neural dynamics establish metastable states, while slow neural dynamics maintain task-relevant information and modulate the stability of these states to enable temporal processing. We systematically analyzed how factors such as neuronal gain, external input strength (contextual cues), and task difficulty influence the temporal properties of neural activity sequences: specifically, dwell time within patterns and transition times between successive patterns. We found that these factors flexibly modulate the stability of metastable states. Our findings provide a unified mechanism for understanding various forms of temporal modulation and suggest a novel computational role for neural timescale diversity in dynamically adapting cognitive performance to changing environmental demands.
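A textbook mechanism for such metastable sequences combines a fast symmetric Hebbian term that stabilizes the current pattern with a slow asymmetric term that pushes activity toward the next pattern (in the spirit of classic sequence networks by Kleinfeld and by Sompolinsky and Kanter). The sketch below implements that classic idea with made-up sizes and timescales, as a stand-in for the paper's richer multi-timescale model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 400, 4                       # neurons, patterns (cyclic sequence)
lam, tau_slow = 2.5, 8.0            # asymmetric strength, slow timescale

xi = rng.choice([-1, 1], size=(P, N)).astype(float)

S = xi[0].copy()                    # fast binary state, start at pattern 0
S_slow = S.copy()                   # slowly low-pass-filtered activity

for t in range(120):
    m_fast = xi @ S / N             # overlaps of fast state with patterns
    m_slow = xi @ S_slow / N        # overlaps of the slow trace
    # fast symmetric field stabilizes the current pattern; the slow
    # asymmetric field points from pattern mu toward pattern mu+1
    h = xi.T @ m_fast + lam * np.roll(xi, -1, axis=0).T @ m_slow
    S = np.sign(h)
    S_slow += (S - S_slow) / tau_slow
    if t % 10 == 0:
        print(t, "closest pattern:", int(np.argmax(xi @ S / N)))
```

Raising tau_slow (or lowering lam) lengthens the dwell time in each pattern before the transition fires, mirroring the kind of stability modulation the abstract describes.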
Our understanding of neural computation is founded on the assumption that neurons fire in response to a linear summation of inputs. Yet experiments demonstrate that some neurons are capable of complex computations that require interactions between inputs. Here we show, across multiple brain regions and species, that simple computations (without interactions between inputs) explain most of the variability in neuronal activity. Neurons are quantitatively described by models that capture the measured dependence on each input individually, but assume nothing about combinations of inputs. These minimal models, which are equivalent to binary artificial neurons, predict complex higher-order dependencies and recover known features of synaptic connectivity. The inferred computations are low-dimensional, indicating a highly redundant neural code that is necessary for error correction. These results suggest that, despite intricate biophysical details, most neurons perform simple computations typically reserved for artificial models.
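A "minimal model" in this sense is essentially a binary artificial neuron: a logistic regression of a cell's spiking on each input individually, with no interaction terms. The synthetic sanity check below (not the paper's data or fitting procedure) compares such a model against one augmented with pairwise interaction features; when the ground truth is interaction-free, the extra terms buy nothing.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_inputs = 5000, 8

# Synthetic ground truth: spiking from a weighted sum of binary inputs
# through a sigmoid -- no input-input interactions by construction.
X = rng.choice([0, 1], size=(n_trials, n_inputs)).astype(float)
w = rng.normal(size=n_inputs)
p = 1 / (1 + np.exp(-(X @ w - w.sum() / 2)))
y = (rng.random(n_trials) < p).astype(int)

# Minimal model: each input enters individually (binary artificial neuron).
minimal = LogisticRegression(max_iter=1000)

# Richer model: all pairwise interaction features added.
pairs = np.column_stack([X[:, i] * X[:, j]
                         for i, j in combinations(range(n_inputs), 2)])
X_int = np.hstack([X, pairs])
interacting = LogisticRegression(max_iter=1000)

print("minimal accuracy :", cross_val_score(minimal, X, y, cv=5).mean())
print("pairwise accuracy:", cross_val_score(interacting, X_int, y, cv=5).mean())
```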
Recent work has demonstrated that large-scale, multi-animal models are powerful tools for characterizing the relationship between neural activity and behavior. Current large-scale approaches, however, focus exclusively on either predicting neural activity from behavior (encoding) or predicting behavior from neural activity (decoding), limiting their ability to capture the bidirectional relationship between neural activity and behavior. To bridge this gap, we introduce a multimodal, multi-task model that enables simultaneous Neural Encoding and Decoding at Scale (NEDS). Central to our approach is a novel multi-task masking strategy, which alternates between neural, behavioral, within-modality, and cross-modality masking. We pretrain our method on the International Brain Laboratory (IBL) repeated site dataset, which includes recordings from 83 animals performing the same visual decision-making task. In comparison to other large-scale models, we demonstrate that NEDS achieves state-of-the-art performance for both encoding and decoding when pretrained on multi-animal data and then fine-tuned on new animals. Surprisingly, NEDS's learned embeddings exhibit emergent properties: even without explicit training, they are highly predictive of the brain regions in each recording. Altogether, our approach is a step towards a foundation model of the brain that enables seamless translation between neural activity and behavior.
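The alternating masking strategy can be sketched independently of the model architecture. The four modes below are one plausible reading of the abstract's neural, behavioral, within-modality, and cross-modality masking; the names, ratios, and token layout are assumptions, not NEDS internals.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_mask(n_neural, n_behavior, ratio=0.3):
    """Draw a boolean mask (True = token hidden and reconstructed) over
    the concatenated [neural | behavioral] token sequence, using one of
    four alternating modes (a plausible reading, not NEDS internals)."""
    mask = np.zeros(n_neural + n_behavior, dtype=bool)
    mode = rng.choice(["neural", "behavioral", "within", "cross"])
    if mode == "neural":            # random masking of neural tokens only
        mask[:n_neural] = rng.random(n_neural) < ratio
    elif mode == "behavioral":      # random masking of behavioral tokens only
        mask[n_neural:] = rng.random(n_behavior) < ratio
    elif mode == "within":          # random masking within both modalities
        mask[:] = rng.random(mask.size) < ratio
    else:                           # cross: hide one modality entirely and
        hide_neural = rng.random() < 0.5   # predict it from the other,
        mask[:n_neural] = hide_neural      # i.e. encoding <-> decoding
        mask[n_neural:] = not hide_neural
    return mode, mask

mode, mask = sample_mask(n_neural=20, n_behavior=6)
print(mode, mask.astype(int))
```

Alternating these modes during pretraining forces a single model to solve encoding, decoding, and in-filling objectives, which is what lets one network serve both prediction directions.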