Browse, search, and filter the latest genomics research papers from arXiv
Gene set analysis (GSA) is a foundational approach for interpreting genomic data of diseases by linking genes to biological processes. However, conventional GSA methods overlook the clinical context of the analysis, often generating long lists of enriched pathways with redundant, nonspecific, or irrelevant results. Interpreting these lists requires extensive, ad hoc manual effort, reducing both reliability and reproducibility. To address this limitation, we introduce cGSA, a novel AI-driven framework that enhances GSA by incorporating context-aware pathway prioritization. cGSA integrates gene cluster detection, enrichment analysis, and large language models to identify pathways that are not only statistically significant but also biologically meaningful. Benchmarking on 102 manually curated gene sets across 19 diseases and ten disease-related biological mechanisms shows that cGSA outperforms baseline methods by over 30%, with expert validation confirming its increased precision and interpretability. Two independent case studies in melanoma and breast cancer further demonstrate its potential to uncover context-specific insights and support targeted hypothesis generation.
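To make the enrichment step in the abstract above concrete, the following is a minimal sketch of a standard hypergeometric over-representation test. It is not the cGSA implementation, and the LLM-based, context-aware prioritization is a separate step not shown here; all names are illustrative.

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_overlap, n_cluster, n_pathway, n_background):
    # P(X >= n_overlap) when drawing n_cluster genes from a background of
    # n_background genes that contains n_pathway pathway members.
    return hypergeom.sf(n_overlap - 1, n_background, n_pathway, n_cluster)

# Example: 12 of 40 cluster genes fall in a 300-gene pathway (20,000-gene background).
print(enrichment_pvalue(12, 40, 300, 20000))
```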
Learning from small data is a challenge frequently encountered in real-world applications. In this work, we study how effective quantum ensemble models are when trained on small-data problems in healthcare and the life sciences. We constructed multiple types of quantum ensembles for binary classification using up to 26 qubits in simulation and 56 qubits on quantum hardware. Our ensemble designs use minimal trainable parameters but require long-range connections between qubits. We tested these quantum ensembles on synthetic datasets and on gene expression data from renal cell carcinoma patients, with the task of predicting patient response to immunotherapy. Based on the results observed in simulation and in initial hardware experiments, we demonstrate how the quantum embedding structure affects performance and discuss how to extract informative features and build models that can learn and generalize effectively. We present these exploratory results to assist other researchers in designing effective ensemble models that learn from small data. Incorporating quantum computing into such data-constrained problems offers hope for a wide range of studies in healthcare and the life sciences, where biological samples are relatively scarce given the feature space to be explored.
Natural Language Processing (NLP) techniques, originally developed for human language, have transformed fields far beyond linguistics, including the analysis of biological sequences. This review explores the application of NLP methods to biological sequence data, focusing on genomics, transcriptomics, and proteomics. We examine how various NLP methods, from classic approaches like word2vec to advanced models employing transformers and hyena operators, are being adapted to analyze DNA, RNA, protein sequences, and entire genomes. The review also surveys tokenization strategies and model architectures, evaluating their strengths, limitations, and suitability for different biological tasks. We further cover recent advances in NLP applications for biological data, such as structure prediction, gene expression, and evolutionary analysis, highlighting the potential of these methods for extracting meaningful insights from large-scale genomic data. As language models continue to advance, their integration into bioinformatics holds immense promise for advancing our understanding of biological processes in all domains of life.
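As a concrete example of the tokenization strategies surveyed above, the sketch below shows overlapping k-mer tokenization of a DNA sequence. The function name and parameters are illustrative; alternatives such as BPE-style subword vocabularies or single-nucleotide tokens are also used in this literature.

```python
def kmer_tokenize(seq: str, k: int = 6, stride: int = 1) -> list[str]:
    """Split a DNA sequence into overlapping k-mer tokens."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

# The resulting tokens can feed word2vec-style embeddings or transformer models.
print(kmer_tokenize("ATGCGTACGTTAG"))
# ['ATGCGT', 'TGCGTA', 'GCGTAC', 'CGTACG', 'GTACGT', 'TACGTT', 'ACGTTA', 'CGTTAG']
```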
Modern single-cell datasets now comprise hundreds of millions of cells, presenting significant challenges for training deep learning models that require shuffled, memory-efficient data loading. While the AnnData format is the community standard for storing single-cell datasets, existing data loading solutions for AnnData are often inadequate: some require loading all data into memory, others convert to dense formats that increase storage demands, and many are hampered by slow random disk access. We present scDataset, a PyTorch IterableDataset that operates directly on one or more AnnData files without the need for format conversion. The core innovation is a combination of block sampling and batched fetching, which together balance randomness and I/O efficiency. On the Tahoe 100M dataset, scDataset achieves up to a 48× speed-up over AnnLoader, a 27× speed-up over HuggingFace Datasets, and an 18× speed-up over BioNeMo in single-core settings. These advances democratize large-scale single-cell model training for the broader research community.
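The following is a minimal sketch of the block-sampling and batched-fetching idea described above, written against the standard AnnData and PyTorch APIs. It is not the scDataset implementation; the class and parameter names (block_size, fetch_factor) are illustrative, and a backed-mode AnnData object is assumed as input.

```python
import numpy as np
from torch.utils.data import IterableDataset

class BlockShuffleDataset(IterableDataset):
    """Shuffle contiguous blocks of rows, fetch several blocks per disk read,
    then shuffle rows within the fetched buffer (randomness vs. I/O trade-off)."""

    def __init__(self, adata, block_size=64, fetch_factor=16, seed=0):
        super().__init__()
        self.adata = adata              # AnnData object opened in backed mode (assumption)
        self.block_size = block_size
        self.fetch_factor = fetch_factor
        self.rng = np.random.default_rng(seed)

    def __iter__(self):
        n = self.adata.n_obs
        starts = np.arange(0, n, self.block_size)
        self.rng.shuffle(starts)                                 # randomize block order
        for i in range(0, len(starts), self.fetch_factor):
            idx = np.concatenate([np.arange(s, min(s + self.block_size, n))
                                  for s in starts[i:i + self.fetch_factor]])
            X = self.adata[idx].X                                # one batched read from disk
            for j in self.rng.permutation(len(idx)):             # shuffle within the buffer
                yield X[j], idx[j]
```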
Inspired by the success of unsupervised pre-training paradigms, researchers have applied these approaches to DNA pre-training. However, we argue that these approaches alone yield suboptimal results, because pure DNA sequences lack sufficient information: their functions are regulated by genomic profiles such as chromatin accessibility. Here, we demonstrate that supervised training for genomic profile prediction serves as a more effective alternative to pure sequence pre-training. Furthermore, considering the multi-species and multi-profile nature of genomic profile prediction, we introduce our Species-Profile Adaptive Collaborative Experts (SPACE) framework, which leverages a Mixture of Experts (MoE) to better capture the relationships between DNA sequences across different species and genomic profiles, thereby learning more effective DNA representations. Through extensive experiments across various tasks, our model achieves state-of-the-art performance, establishing that DNA models trained with supervised genomic profiles serve as powerful DNA representation learners. The code is available at https://github.com/ZhuJiwei111/SPACE.
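The sketch below illustrates the core Mixture-of-Experts idea referenced above with a densely gated layer in PyTorch. The actual SPACE architecture (species- and profile-adaptive routing) is more elaborate; all names, dimensions, and the dense gating choice here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseMoE(nn.Module):
    """A softmax gate mixes the outputs of several expert MLPs at each position."""

    def __init__(self, dim=128, n_experts=4, hidden=256):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                                        # x: (batch, seq, dim)
        gates = F.softmax(self.gate(x), dim=-1)                  # (batch, seq, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=-1) # (batch, seq, dim, n_experts)
        return torch.einsum("bsde,bse->bsd", outs, gates)

y = DenseMoE()(torch.randn(2, 10, 128))                          # toy usage
```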
Climate change is a major threat to crop potential. It is characterized both by long-term shifts in temperature and precipitation patterns and by an increased occurrence of extreme weather events, and these extreme weather events are the most immediate and intractable threat to agriculture. Crop resilience in the face of stress depends upon the speed and effectiveness with which plants and cropping systems sense and respond to that stress. A variety of agronomic practices, including breeding, exogenous inputs (nutrients, water, biostimulants, and others), and shifts in cultivation practice, have been used to influence plant stress responses with the goal of increasing plant and cropping system resilience. Traditional breeding is a powerful tool that has delivered stable, long-term cultivar improvements, but it is often too slow and complex to meet the diverse, complex, and unpredictable challenges of climate-induced stresses. Increased inputs (water, nutrients, pesticides, etc.) and management strategies (cropping system choice, soil management, etc.) can alleviate stress but are often constrained by the cost and availability of inputs. Exogenous biostimulants, microbials, and plant hormones have shown great promise as mechanisms to optimize natural plant resilience, producing immediate but non-permanent improvements in plant responses to climate-induced stresses. The failure to modernize regulatory frameworks for the use of biostimulants in agriculture will constrain the development of safe, effective tools and deprive growers of the means to respond to the vagaries of climate change. Here we discuss the scientific rationale for eliminating the regulatory barriers that constrain the potential of biostimulants, and of products that modulate plant regulatory networks, to address climate change challenges, and we propose a framework for enabling legislation to strengthen cropping system resilience.
Multiplexed immunofluorescence microscopy captures spatially resolved measurements of multiple biomarkers simultaneously, revealing tissue composition and cellular interactions in situ at single-cell resolution. The growing scale and dimensional complexity of these datasets demand reproducible, comprehensive, and user-friendly computational tools. To address this need, we developed SPAC (SPAtial single-Cell analysis), a Python-based package and corresponding Shiny application within an integrated, modular SPAC ecosystem (Liu et al., 2025) designed specifically for biologists without extensive coding expertise. Following image segmentation and extraction of spatially resolved single-cell data, SPAC streamlines downstream phenotyping and spatial analysis, facilitating the characterization of cellular heterogeneity and spatial organization within tissues. Through scalable performance, specialized spatial statistics, highly customizable visualizations, and seamless workflows from dataset to insights, SPAC significantly lowers the barriers to sophisticated spatial analyses.
Recent studies have shown that integrating multimodal data fusion techniques for imaging and genetic features is beneficial for the etiological analysis and predictive diagnosis of Alzheimer's disease (AD). However, current deep learning methods have several critical flaws. First, the selection and encoding of genetic information have received insufficient discussion and exploration. Second, because AD imaging features carry significantly more classification value than genetic features, many multimodal fusion studies emphasize the strengths of imaging features and actively suppress the influence of the weaker modality, thereby diminishing what is learned from the unique value of genetic features. To address these issues, this study proposes the dynamic multimodal role-swapping network (GenDMR). In GenDMR, we develop a novel approach to encode the spatial organization of single nucleotide polymorphisms (SNPs), enhancing the representation of their genomic context. Additionally, to adaptively quantify the disease risk of SNPs and brain regions, we propose a multi-instance attention module that enhances model interpretability. Furthermore, we introduce a dominant modality selection module and a contrastive self-distillation module, combining them to achieve a dynamic teacher-student role-exchange mechanism based on dominant and auxiliary modalities for bidirectional co-updating of the different modal data. Finally, GenDMR achieves state-of-the-art performance on the public ADNI dataset and visualizes attention over different SNPs, confirming 12 potential high-risk genes related to AD, including the classic APOE and recently highlighted significant risk genes. This demonstrates GenDMR's interpretable analytical capability for exploring AD genetic features, providing new insights and perspectives for the development of multimodal data fusion techniques.
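For readers unfamiliar with multi-instance attention, the sketch below shows the widely used attention-based MIL pooling of Ilse et al. (2018), which yields per-instance weights that can be read as risk scores. It is only an illustration of the general mechanism, not the GenDMR module, and all names are illustrative.

```python
import torch
import torch.nn as nn

class MILAttentionPool(nn.Module):
    """Attention-based pooling over a bag of instances (e.g., SNP embeddings)."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, bag):                               # bag: (n_instances, dim)
        weights = torch.softmax(self.score(bag), dim=0)   # per-instance attention
        embedding = (weights * bag).sum(dim=0)            # weighted bag representation
        return embedding, weights.squeeze(-1)             # weights are interpretable as risk

pool = MILAttentionPool(dim=32)
z, w = pool(torch.randn(100, 32))                         # 100 instances -> one bag embedding
```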
Alzheimer's disease (AD) dementia is the most common form of dementia. With the emergence of disease-modifying therapies, predicting disease risk before symptom onset has become critical. We introduce DuAL-Net, a hybrid deep learning framework for AD dementia prediction using whole genome sequencing (WGS) data. DuAL-Net integrates two components: local probability modeling, which segments the genome into non-overlapping windows, and global annotation-based modeling, which annotates SNPs and reorganizes the WGS input to capture long-range functional relationships. Both employ out-of-fold stacking with TabNet and Random Forest classifiers. Final predictions combine the local and global probabilities using an optimized weighting parameter alpha. We analyzed WGS data from 1,050 individuals (443 cognitively normal, 607 AD dementia) using five-fold cross-validation. DuAL-Net achieved an AUC of 0.671 using top-ranked SNPs, 35.0% and 20.3% higher than with bottom-ranked and randomly selected SNPs, respectively. ROC analysis demonstrated a strong positive correlation between SNP prioritization rank and predictive power. The model identified known AD-associated SNPs as top contributors, alongside potentially novel variants. DuAL-Net is a promising framework that improves both predictive accuracy and biological interpretability, and its web implementation offers an accessible platform for broader research applications.
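The final weighting step described above can be sketched as follows: the local and global probabilities are blended with a single parameter alpha chosen on validation data. Function and variable names are illustrative, not taken from the DuAL-Net code, and the grid-search selection of alpha is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def blend_probabilities(p_local, p_global, y_val, grid=np.linspace(0.0, 1.0, 101)):
    """Pick alpha maximizing validation AUC of alpha*p_local + (1-alpha)*p_global."""
    aucs = [roc_auc_score(y_val, a * p_local + (1 - a) * p_global) for a in grid]
    alpha = float(grid[int(np.argmax(aucs))])
    return alpha, alpha * p_local + (1 - alpha) * p_global
```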
INTRODUCTION: Alzheimer's disease (AD) is genetically complex, complicating robust classification from genomic data. METHODS: We developed a transformer-based ensemble model (TrUE-Net) using Monte Carlo Dropout for uncertainty estimation in AD classification from whole-genome sequencing (WGS). We combined a transformer that preserves single-nucleotide polymorphism (SNP) sequence structure with a concurrent random forest using flattened genotypes. An uncertainty threshold separated samples into an uncertain (high-variance) group and a more certain (low-variance) group. RESULTS: We analyzed 1,050 individuals, holding out half for testing. Overall accuracy and area under the receiver operating characteristic (ROC) curve (AUC) were 0.6514 and 0.6636, respectively. Excluding the uncertain group improved accuracy from 0.6263 to 0.7287 (a 10.24 percentage-point increase) and F1 from 0.5843 to 0.8205 (a 23.62 percentage-point increase). DISCUSSION: Monte Carlo Dropout-driven uncertainty helps identify ambiguous cases that may require further clinical evaluation, thus improving reliability in AD genomic classification.
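A minimal sketch of the Monte Carlo Dropout uncertainty estimate described above: dropout is kept active at inference, and the variance across repeated stochastic forward passes serves as the score that is thresholded to separate uncertain from certain samples. The function name, the assumption of a binary-logit model, and the choice of threshold are all illustrative.

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    """Return predictive mean and variance from repeated stochastic forward passes."""
    model.train()                                   # keep dropout layers active
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)      # variance serves as the uncertainty score
```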
Low-cost, high-throughput DNA and RNA sequencing (HTS) data is the main workhorse of the life sciences. Genome sequencing is now becoming part of Predictive, Preventive, Personalized, and Participatory ('P4') medicine. All genomic data are currently processed in energy-hungry computer clusters and data centers, which necessitates data transfer, consumes substantial energy, and wastes valuable time. There is therefore a need for fast, energy-efficient, and cost-efficient technologies that enable genomics research without requiring data centers and cloud platforms. We recently started the BioPIM Project to leverage emerging processing-in-memory (PIM) technologies for energy- and cost-efficient analysis of bioinformatics workloads. The BioPIM Project focuses on co-designing algorithms and data structures commonly used in genomics with several PIM architectures to achieve the greatest savings in cost, energy, and time.
Single-cell RNA sequencing (scRNA-seq) has revolutionized our understanding of cellular processes by enabling gene expression analysis at the individual-cell level. Clustering allows for the identification of cell types and the further discovery of intrinsic patterns in single-cell data. However, the high dimensionality and sparsity of scRNA-seq data continue to challenge existing clustering models. In this paper, we introduce JojoSCL, a novel self-supervised contrastive learning framework for scRNA-seq clustering. JojoSCL incorporates a shrinkage estimator based on hierarchical Bayesian estimation, which pulls gene expression estimates toward more reliable cluster centroids to reduce intra-cluster dispersion and is optimized using Stein's Unbiased Risk Estimate (SURE); on this basis, it refines both instance-level and cluster-level contrastive learning. Experiments on ten scRNA-seq datasets substantiate that JojoSCL consistently outperforms prevalent clustering methods, with its practicality further validated through robustness analysis and ablation studies. JojoSCL's code is available at: https://github.com/ziwenwang28/JojoSCL.
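To illustrate the shrinkage idea above, the sketch below pulls each cell's expression vector a fixed fraction of the way toward its cluster centroid. JojoSCL's estimator is hierarchical-Bayesian and tunes the shrinkage strength via SURE, which is not reproduced here; the fixed lam and function name are illustrative.

```python
import numpy as np

def shrink_to_centroids(X, labels, lam=0.5):
    """Move each row of X a fraction lam of the way toward its cluster centroid."""
    X_shrunk = X.astype(float).copy()
    for k in np.unique(labels):
        mask = labels == k
        centroid = X[mask].mean(axis=0)
        X_shrunk[mask] = (1 - lam) * X[mask] + lam * centroid
    return X_shrunk
```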
Cell-free DNA (cfDNA) analysis is a powerful, minimally invasive tool for monitoring disease progression, treatment response, and early detection. A major challenge, however, is accurately determining the tissue of origin, especially in complex or heterogeneous disease contexts. To address this, we developed a machine learning framework that leverages tissue-specific DNA methylation signatures to classify both tissue and disease origin from cfDNA data. Our model integrates methylation datasets across diverse epigenomic platforms, including Whole Genome Bisulfite Sequencing (WGBS), Illumina Infinium Bead Arrays, and Enzymatic Methyl-seq (EM-seq). To account for platform variability and data sparsity, we applied imputation strategies and harmonized CpG features to enable cross-platform learning. Dimensionality reduction revealed clear tissue-specific clustering of methylation profiles. A random forest classifier trained on these features achieved consistent classification performance (accuracy 0.75-0.8 across test sets and platforms). Notably, our model distinguished clinically relevant tissues such as inflamed synovium and peripheral blood mononuclear cells (PBMCs) in arthritis patients and deconvoluted synthetic cfDNA mixtures mimicking real-world liquid biopsy samples. The predicted tissue proportions closely matched the true values, demonstrating the model's potential for both classification and quantitative inference. These results support the feasibility of using cross-platform methylation data and machine learning for scalable, generalizable cfDNA diagnostics and lay the groundwork for future integration of disease-specific epigenetic features to guide clinical decision-making in precision medicine.
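A small end-to-end sketch of the classification and mixture steps described above, using synthetic stand-ins for harmonized CpG beta values. The tissue names, effect sizes, and the probability-averaging shortcut used for proportions are illustrative simplifications, not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_tissue, n_cpgs = 100, 300
tissues = ["PBMC", "synovium", "liver"]
# Synthetic "beta values": each tissue gets a different mean methylation level.
X = np.vstack([np.clip(rng.normal(0.3 + 0.2 * i, 0.1, size=(n_per_tissue, n_cpgs)), 0, 1)
               for i in range(len(tissues))])
y = np.repeat(tissues, n_per_tissue)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# Rough proportion estimate for a synthetic 70/30 PBMC-synovium mixture.
mixture = 0.7 * X_te[y_te == "PBMC"][:20] + 0.3 * X_te[y_te == "synovium"][:20]
proportions = clf.predict_proba(mixture).mean(axis=0)
print(dict(zip(clf.classes_, proportions.round(2))))
```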
Accurately predicting gene mutations, mutation subtypes, and their exons in lung cancer is critical for personalized treatment planning and prognostic assessment. Given regional disparities in medical resources and the high cost of genomic assays, using artificial intelligence to infer these mutations and exon variants from routine histopathology images could greatly facilitate precision therapy. Although some prior studies have shown that deep learning can accelerate the prediction of key gene mutations from lung cancer pathology slides, their performance remains suboptimal and has so far been limited mainly to early screening tasks. To address these limitations, we assembled PathGene, which comprises histopathology images paired with next-generation sequencing reports from 1,576 patients at the Second Xiangya Hospital, Central South University, and 448 TCGA-LUAD patients. This multi-center dataset links whole-slide images to driver gene mutation status, mutation subtypes, exons, and tumor mutational burden (TMB) status, with the goal of leveraging pathology images to predict mutations, subtypes, exon locations, and TMB for early genetic screening and to advance precision oncology. Unlike existing datasets, PathGene provides molecular-level information linked to histopathology images to facilitate the development of biomarker prediction models. We benchmarked 11 multiple-instance learning methods on PathGene for the mutation, subtype, exon, and TMB prediction tasks. These benchmarks provide valuable alternatives for early genetic screening of lung cancer patients and can assist clinicians in quickly developing personalized, precision-targeted treatment plans. Code and data are available at https://github.com/panliangrui/NIPS2025/.
Background: Platelet proteomics offers valuable insights for clinical research, yet isolating high-purity platelets remains a challenge. Current methods often lead to contamination or platelet loss, compromising data quality and reproducibility. Objectives: This study aimed to optimize a platelet isolation technique that yields high-purity samples with minimal loss and to identify the most effective mass spectrometry-based proteomic method for analyzing platelet proteins with optimal coverage and sensitivity. Methods: We refined an isolation protocol by adjusting centrifugation time to reduce blood volume requirements while preserving platelet yield and purity. Using this optimized method, we evaluated three proteomic approaches: Label-free Quantification with Data-Independent Acquisition (LFQ-DIA), Label-free Quantification with Data-Dependent Acquisition (LFQ-DDA), and Tandem Mass Tag labeling with DDA (TMT-DDA). Results: LFQ-DIA demonstrated superior protein coverage and sensitivity compared to LFQ-DDA and TMT-DDA. The refined isolation protocol effectively minimized contamination and platelet loss. Additionally, age-related differences in platelet protein composition were observed, highlighting the importance of using age-matched controls in biomarker discovery studies. Conclusions: The optimized platelet isolation protocol provides a cost-effective and reliable method for preparing high-purity samples for proteomics. LFQ-DIA is the most suitable approach for comprehensive platelet protein analysis. Age-related variation in platelet proteomes underscores the need for demographic matching in clinical proteomic research.
Potato functional genomics lags behind due to unsystematic curation of gene information, inconsistent gene identifiers across reference genome versions, and the increasing volume of research publications. To address these limitations, we developed the Potato Knowledge Hub (http://www.potato-ai.top), leveraging Large Language Models (LLMs) and a systematically curated collection of over 3,200 high-quality potato research papers spanning more than 120 years. This platform integrates two key modules: a functional gene database containing 2,571 literature-reported genes, meticulously mapped to the latest DMv8.1 reference genome with resolved nomenclature discrepancies and links to original publications; and a potato knowledge base. The knowledge base, built on a Retrieval-Augmented Generation (RAG) architecture, accurately answers research queries with literature citations, mitigating LLM "hallucination." Users can interact with the hub via a natural language AI agent, "Potato Research Assistant," to query specialized knowledge, retrieve gene information, and extract sequences. The continuously updated Potato Knowledge Hub aims to be a comprehensive resource, fostering advancements in potato functional genomics and supporting breeding programs.
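The retrieval-augmented question answering described above can be sketched as retrieve-then-prompt. The toy corpus, TF-IDF retriever, and prompt template below are illustrative stand-ins for the hub's curated literature index and backing LLM; none of these names come from the Potato Knowledge Hub code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy curated passages standing in for the literature index.
passages = [
    "StCDF1 regulates tuberization in response to day length.",
    "StSP6A is a mobile tuberization signal in potato.",
]
question = "Which gene acts as a mobile tuberization signal?"

# Retrieve the most relevant passage (here via TF-IDF; the hub uses its own retriever).
vec = TfidfVectorizer().fit(passages + [question])
sims = cosine_similarity(vec.transform([question]), vec.transform(passages))[0]
context = passages[sims.argmax()]

# The grounded prompt would then be sent to the backing LLM, which answers with a
# citation to the original publication, mitigating hallucination.
prompt = f"Answer using only the context and cite it.\nContext: {context}\nQuestion: {question}"
print(prompt)
```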
Topologically associating domains (TADs) are spatially distinct chromatin regions that regulate transcription by segregating active and inactive genomic elements. Empirical studies show that their formation correlates with local patterns of epigenetic markers, yet the precise mechanisms linking 1D epigenetic landscapes to 3D chromatin folding remain unclear. Recent models represent chromatin as a spin system, where nucleosomes are treated as discrete-state variables coupled by interaction strengths derived from genomic and epigenomic data. Classical samplers struggle with these models due to high frustration and dense couplings. Here, we present a quantum annealing (QA) approach to efficiently sample chromatin states, embedding an epigenetic Ising model into the topology of D-Wave quantum processors.
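For orientation, the toy sketch below writes down the kind of epigenetic Ising model being sampled, using a classical Metropolis sampler as a baseline. The paper's contribution is embedding such a model on D-Wave quantum annealers, which is not shown here, and the couplings below are random placeholders rather than data-derived values.

```python
import numpy as np

# Nucleosome states s_i = ±1 coupled by J_ij (derived from epigenomic data in the
# paper; random placeholders here) and biased by local fields h_i.
rng = np.random.default_rng(0)
n = 50
J = rng.normal(0, 0.3, size=(n, n))
J = np.triu(J, 1)
J = J + J.T                                    # symmetric couplings, zero diagonal
h = rng.normal(0, 0.1, size=n)                 # local epigenetic bias
s = rng.choice([-1, 1], size=n)

beta = 2.0                                     # inverse temperature
for _ in range(20000):
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s + h[i])          # energy change of flipping spin i
    if dE < 0 or rng.random() < np.exp(-beta * dE):
        s[i] *= -1                             # accept the flip

print("sampled chromatin state:", s)
```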
DNA, which encodes the genetic instructions for almost all living organisms, fuels groundbreaking advances in genomics and synthetic biology. Recently, DNA foundation models have achieved success in designing synthetic functional DNA sequences, even whole genomes, but their susceptibility to jailbreaking remains underexplored, raising concerns about the generation of harmful sequences such as pathogens or toxin-producing genes. In this paper, we introduce GeneBreaker, the first framework to systematically evaluate jailbreak vulnerabilities of DNA foundation models. GeneBreaker employs (1) an LLM agent with customized bioinformatic tools to design high-homology, non-pathogenic jailbreaking prompts, (2) beam search guided by PathoLM and log-probability heuristics to steer generation toward pathogen-like sequences, and (3) a BLAST-based evaluation pipeline against a curated Human Pathogen Database (JailbreakDNABench) to detect successful jailbreaks. Evaluated on our JailbreakDNABench, GeneBreaker consistently jailbreaks the latest Evo series models across six viral categories (up to a 60% attack success rate for Evo2-40B). Further case studies on the SARS-CoV-2 spike protein and the HIV-1 envelope protein demonstrate the sequence and structural fidelity of the jailbreak output, while evolutionary modeling of SARS-CoV-2 underscores the biosecurity risks. Our findings also reveal that scaling DNA foundation models amplifies dual-use risks, motivating enhanced safety alignment and tracing mechanisms. Our code is at https://github.com/zaixizhang/GeneBreaker.