Digital pathology has emerged as a transformative approach to tissue analysis, offering unprecedented opportunities for objective, quantitative assessment of histopathological features. However, the complexity of implementing artificial intelligence (AI) solutions in pathology workflows has limited widespread adoption. Here we present ORCA (Optimized Research and Clinical Analytics), a comprehensive no-code AI platform specifically designed for digital pathology applications. ORCA addresses critical barriers to AI adoption by providing an intuitive interface that enables pathologists and researchers to train, deploy, and validate custom AI models without programming expertise. The platform integrates advanced deep learning architectures with clinical workflow management, supporting applications from tissue classification and cell segmentation to spatial distribution scoring and novel biomarker discovery. We demonstrate ORCA's capabilities through validation studies across multiple cancer types, showing significant improvements in analytical speed, reproducibility, and clinical correlation compared to traditional manual assessment methods. Our results indicate that ORCA successfully democratizes access to state-of-the-art AI tools in pathology, potentially accelerating biomarker discovery and enhancing precision medicine initiatives.
We present a probabilistic framework for modeling structured spatiotemporal dynamics from sparse observations, focusing on cardiac motion. Our approach integrates neural ordinary differential equations (NODEs), graph neural networks (GNNs), and neural processes into a unified model that captures uncertainty, temporal continuity, and anatomical structure. We represent dynamic systems as spatiotemporal multiplex graphs and model their latent trajectories using a GNN-parameterized vector field. Given the sparse context observations at node and edge levels, the model infers a distribution over latent initial states and control variables, enabling both interpolation and extrapolation of trajectories. We validate the method on three synthetic dynamical systems (coupled pendulum, Lorenz attractor, and Kuramoto oscillators) and two real-world cardiac imaging datasets - ACDC (N=150) and UK Biobank (N=526) - demonstrating accurate reconstruction, extrapolation, and disease classification capabilities. The model accurately reconstructs trajectories and extrapolates future cardiac cycles from a single observed cycle. It achieves state-of-the-art results on the ACDC classification task (up to 99% accuracy), and detects atrial fibrillation in UK Biobank subjects with competitive performance (up to 67% accuracy). This work introduces a flexible approach for analyzing cardiac motion and offers a foundation for graph-based learning in structured biomedical spatiotemporal time-series data.
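To make the modeling recipe concrete, here is a minimal sketch, under our own simplifying assumptions (a fixed dense adjacency, fixed-step Euler integration, no neural-process encoder), of a GNN-parameterized vector field driving latent node states; all class and variable names are illustrative, not the authors' code.

```python
# Minimal sketch (not the authors' code): a graph-message-passing vector
# field integrated as a neural ODE over latent node states.
import torch
import torch.nn as nn

class GNNVectorField(nn.Module):
    """dz/dt = f(z, A): one round of neighbour aggregation + MLP."""
    def __init__(self, dim, adj):
        super().__init__()
        self.adj = adj  # (N, N) normalized adjacency, fixed here for simplicity
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, z):
        msgs = self.adj @ z                      # aggregate neighbour states
        return self.mlp(torch.cat([z, msgs], dim=-1))

def odeint_euler(f, z0, ts):
    """Fixed-step Euler integrator; returns the latent trajectory at times ts."""
    traj, z = [z0], z0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        z = z + (t1 - t0) * f(z)
        traj.append(z)
    return torch.stack(traj)

N, dim = 5, 8                                    # 5 nodes, 8-dim latent state
adj = torch.rand(N, N); adj = adj / adj.sum(-1, keepdim=True)
field = GNNVectorField(dim, adj)
z0 = torch.randn(N, dim)                         # latent initial state (the paper
                                                 # infers a distribution over this)
ts = torch.linspace(0.0, 1.0, 20)
trajectory = odeint_euler(field, z0, ts)         # (20, N, dim)
print(trajectory.shape)
```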
Despite significant medical advancements, cancer remains the second leading cause of death, with over 600,000 deaths per year in the US. One emerging field, pathway analysis, is promising but still relies on manually derived wet lab data, which is time-consuming to acquire. This work proposes an efficient, effective end-to-end framework for Artificial Intelligence (AI) based pathway analysis that predicts both cancer severity and mutation progression, thus recommending possible treatments. The proposed technique involves a novel combination of time-series machine learning models and pathway analysis. First, mutation sequences were isolated from The Cancer Genome Atlas (TCGA) Database. Then, a novel preprocessing algorithm was used to filter key mutations by mutation frequency. This data was fed into a Recurrent Neural Network (RNN) that predicted cancer severity. The model then probabilistically combined the RNN predictions, information from the preprocessing algorithm, and multiple drug-target databases to predict future mutations and recommend possible treatments. This framework achieved robust results, with Receiver Operating Characteristic (ROC) analysis yielding accuracies greater than 60%, comparable to existing cancer diagnostics. In addition, preprocessing played an instrumental role in isolating important mutations, demonstrating that each cancer stage studied may contain on the order of a few hundred key driver mutations, consistent with current research. Heatmaps based on predicted gene frequency were also generated, highlighting key mutations in each cancer. Overall, this work is the first to propose an efficient, cost-effective end-to-end framework for projecting cancer progression and recommending possible treatments without relying on expensive, time-consuming wet lab work.
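As a rough illustration of the severity-prediction step, the following hedged sketch shows an LSTM over sequences of mutation-token IDs; the vocabulary size, class count, and architecture are placeholders of ours, not the paper's configuration.

```python
# Illustrative sketch only: an LSTM that maps a padded sequence of mutation
# token IDs to a cancer-severity class, loosely following the abstract.
import torch
import torch.nn as nn

class SeverityRNN(nn.Module):
    def __init__(self, n_mutations, n_classes, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_mutations, emb, padding_idx=0)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, seqs):                  # seqs: (batch, seq_len) int IDs
        _, (h, _) = self.rnn(self.embed(seqs))
        return self.head(h[-1])               # logits over severity classes

model = SeverityRNN(n_mutations=500, n_classes=4)  # sizes are placeholders
seqs = torch.randint(1, 500, (8, 30))              # 8 patients, 30 mutations each
logits = model(seqs)
print(logits.shape)                                # (8, 4)
```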
Multimodal data provides heterogeneous information for a holistic understanding of the tumor microenvironment. However, existing AI models often fail to harness the rich information within multimodal data, extracting poorly generalizable representations. Here we present MICE (Multimodal data Integration via Collaborative Experts), a multimodal foundation model that effectively integrates pathology images, clinical reports, and genomics data for precise pan-cancer prognosis prediction. Instead of conventional multi-expert modules, MICE employs multiple functionally diverse experts to comprehensively capture both cross-cancer and cancer-specific insights. Leveraging data from 11,799 patients across 30 cancer types, we enhanced MICE's generalizability by coupling contrastive and supervised learning. MICE outperformed both unimodal and state-of-the-art multi-expert-based multimodal models, demonstrating substantial improvements in C-index: 3.8% to 11.2% on internal cohorts and 5.8% to 8.8% on independent cohorts. Moreover, it exhibited remarkable data efficiency across diverse clinical scenarios. With its enhanced generalizability and data efficiency, MICE establishes an effective and scalable foundation for pan-cancer prognosis prediction, holding strong potential to personalize tailored therapies and improve treatment outcomes.
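One way to picture a multi-expert fusion head is the hedged sketch below: modality embeddings are concatenated, routed through several expert MLPs, and combined by a learned gate into a risk score. This is our illustration of the general pattern, not the published MICE architecture, and all names and dimensions are assumptions.

```python
# Hedged sketch of a collaborative-experts fusion head (our illustration,
# not the published MICE architecture).
import torch
import torch.nn as nn

class CollaborativeExperts(nn.Module):
    def __init__(self, dims, n_experts=4, hidden=128):
        super().__init__()
        fused = sum(dims)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(fused, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(fused, n_experts)

    def forward(self, pathology, report, genomics):
        x = torch.cat([pathology, report, genomics], dim=-1)
        w = torch.softmax(self.gate(x), dim=-1)               # (batch, n_experts)
        risks = torch.cat([e(x) for e in self.experts], -1)   # (batch, n_experts)
        return (w * risks).sum(-1)                            # scalar risk per patient

model = CollaborativeExperts(dims=(512, 256, 128))            # placeholder dims
risk = model(torch.randn(4, 512), torch.randn(4, 256), torch.randn(4, 128))
print(risk.shape)                                             # (4,)
```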
The ability of virus shells to encapsulate a wide range of functional cargoes, especially multiple cargoes - siRNAs, enzymes, and chromophores - has made them an essential tool in biotechnology for advancing drug delivery applications and developing innovative new materials. Here we present a mechanistic study of the processes and pathways that lead to multiple cargo encapsulation in the co-assembly of virus shell proteins with ligand-coated nanoparticles. Based on the structural identification of different intermediates, enabled by the electron-microscopy contrast provided by the metal nanoparticles that play the cargo role, we find that multiple cargo encapsulation occurs by self-assembly via a specific "assembly line" pathway that differs from previously described in vitro assembly mechanisms of virus-like particles (VLPs). The emerging model explains observations that are potentially important for delivery applications, for instance, the pronounced nanoparticle size selectivity.
The macroscopic (population-level) dynamics of chemotactic cell movement -- arising from underlying microscopic (individual-based) models -- are often described by parabolic partial differential equations (PDEs) governing the spatio-temporal evolution of cell concentrations. In certain cases, these macroscopic PDEs can be analytically derived from microscopic models, thereby elucidating the dependence of PDE coefficients on the parameters of the underlying individual-based dynamics. However, such analytical derivations are not always feasible, particularly for more complex or nonlinear microscopic models. In these instances, neural networks offer a promising alternative for estimating the coefficients of macroscopic PDEs directly from data generated by microscopic simulations. In this work, three microscopic models of chemotaxis are investigated. The macroscopic chemotaxis sensitivity is estimated using neural networks, thereby bridging the gap between individual-level behaviours and population-level descriptions. The results are compared with macroscopic PDEs, which can be derived for each model in certain parameter regimes.
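As a toy illustration of estimating a macroscopic coefficient from microscopic data, the sketch below fits a small network chi(c) so that a Keller-Segel-type flux matches simulated flux measurements; the flux form, the synthetic data, and the true sensitivity are all our constructions, not the paper's setup.

```python
# Toy illustration (our construction, not the paper's setup): fit a small
# network chi(c) so that the flux  J = chi(c) * n * dc/dx - D * dn/dx
# matches flux measurements extracted from microscopic simulations.
import numpy as np
import torch
import torch.nn as nn

D = 0.1
true_chi = lambda c: 1.0 / (1.0 + c) ** 2        # ground truth used to fake "data"

# Synthetic "measurements": concentration c, gradients, cell density n, flux J.
rng = np.random.default_rng(0)
c = rng.uniform(0, 2, 512); dc = rng.normal(0, 1, 512)
n = rng.uniform(0.5, 1.5, 512); dn = rng.normal(0, 1, 512)
J = true_chi(c) * n * dc - D * dn + rng.normal(0, 0.01, 512)

t = lambda a: torch.tensor(a, dtype=torch.float32)
c_t, dc_t, n_t, dn_t, J_t = map(t, (c, dc, n, dn, J))

chi_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(chi_net.parameters(), lr=1e-2)
for _ in range(500):
    chi = chi_net(c_t.unsqueeze(-1)).squeeze(-1)
    loss = ((chi * n_t * dc_t - D * dn_t - J_t) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))   # small residual: chi_net approximates the true sensitivity
```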
TRPM4 is overexpressed in prostate cancer (PCa) and is associated with metastasis or recurrence. There is a paucity of information pertaining to TRPM4 characterization and function at the single-cell level in PCa. In this study, a generalized additive model (GAM) was utilized to model the relationship between TRPM4 and genes shortlisted using a Spearman-Kendall dual filter in aggressive PCa and benign prostate (BP) control cells derived from a scRNA-seq dataset. Seven ribosomal genes (RPL10, RPL27, RPL28, RPS2, RPS8, RPS12, and RPS26; averaged into Ribo as the gene set) passed the dual filter specifically in PCa cells. GAM modeling of TRPM4-Ribo significantly outperformed TRPM4 modeling with alternative cancer gene sets (GSK-3β, mTOR, NF-κB, PI3K/AKT, and Wnt). Cell explanatory power (CEP) classification was devised and verified by cross-validation to identify the individual PCa cells best predicted by the model. CEP classification binarized PCa cells into top-ranked explanatory power (TREP; better predicted by the model) and non-TREP cells. In TRPM4-Ribo GAM plots, the distribution pattern of TREP cells shifted at an inflection point (IP), i.e., the specific TRPM4 expression value that further binarized the plot into pre-IP (TRPM4 values below IP) and post-IP (TRPM4 values above IP) regions, producing a quadrant of TREP versus non-TREP cells for each PCa patient. Gene Ontology (GO) enrichment analysis showed that pre-IP TREP cells were enriched for immune-related GO terms, while post-IP TREP cells were enriched for ribosomal, translation, and cell adhesion GO terms. In conclusion, the CEP-IP framework based on pairwise genes produces quadrants of cancer cell subpopulations, enabling the identification of distinctive biology with potential therapeutic implications.
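A minimal sketch of the GAM step, in the spirit of the CEP idea, is shown below; it assumes the `pygam` package, simulates gene values rather than using the study's scRNA-seq data, and uses an arbitrary residual quantile as the "well-predicted" cutoff.

```python
# Minimal sketch (assumes the `pygam` package; gene values are simulated,
# not the study's data): fit a GAM of an averaged ribosomal score on TRPM4
# expression, then rank cells by how well the model predicts them.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(1)
trpm4 = rng.gamma(2.0, 1.0, 300)                      # per-cell TRPM4 expression
ribo = np.log1p(trpm4) + rng.normal(0, 0.3, 300)      # averaged ribosomal score

gam = LinearGAM(s(0)).fit(trpm4.reshape(-1, 1), ribo)
residuals = np.abs(ribo - gam.predict(trpm4.reshape(-1, 1)))

# Cells with the smallest residuals are the most well-predicted ones
# (analogous to TREP cells); the 25% threshold is an arbitrary placeholder.
trep_mask = residuals <= np.quantile(residuals, 0.25)
print(trep_mask.sum(), "TREP-like cells of", len(trpm4))
```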
Drug repurposing has historically been an economically infeasible process for identifying novel uses for abandoned drugs. Modern machine learning has enabled the identification of complex biochemical intricacies in candidate drugs; however, many studies rely on simplified datasets with known drug-disease similarities. We propose a machine learning pipeline that uses unsupervised deep embedded clustering, combined with supervised graph neural network link prediction, to identify new drug-disease links from multi-omic data. Unsupervised autoencoder and cluster training reduced the dimensionality of the omic data into a compressed latent embedding. A total of 9,022 unique drugs were partitioned into 35 clusters with a mean silhouette score of 0.8550. Graph neural networks achieved strong statistical performance, with a prediction accuracy of 0.901, a receiver operating characteristic area under the curve of 0.960, and an F1 score of 0.901. A ranked list comprising 477 per-cluster link probabilities exceeding 99 percent was generated. This study could provide new drug-disease link prospects across unrelated disease domains, while advancing the understanding of machine learning in drug repurposing studies.
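The embedding-and-clustering half of such a pipeline can be sketched as below, on a simulated omic matrix; this is a conceptual stand-in for the study's method (the GNN link-prediction stage is omitted), and all sizes are placeholders.

```python
# Conceptual sketch (simulated omic matrix; not the study's pipeline):
# compress drug feature vectors with an autoencoder, cluster the latent
# embedding with k-means, and check cluster quality with the silhouette score.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = torch.tensor(rng.normal(size=(600, 200)), dtype=torch.float32)  # drugs x omic features

enc = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 16))
dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 200))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
for _ in range(200):                    # reconstruction-only pre-training
    loss = ((dec(enc(X)) - X) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

Z = enc(X).detach().numpy()             # compressed latent embedding
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Z)
print("silhouette:", silhouette_score(Z, labels))
```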
This article presents a novel microscopy image analysis framework designed for low-budget labs equipped with a standard CPU desktop. The Python-based program enables cytometric analysis of live, unstained cells in culture through an advanced computer vision and machine learning pipeline. Crucially, the framework operates on label-free data, requiring no manually annotated training data or training phase. It is accessible via a user-friendly, cross-platform GUI that requires no programming skills, while also providing a scripting interface for programmatic control and integration by developers. The end-to-end workflow performs semantic and instance segmentation, feature extraction, analysis, evaluation, and automated report generation. Its modular architecture supports easy maintenance and flexible integration while supporting both single-image and batch processing. Validated on several unstained cell types from the public LIVECell dataset, the framework demonstrates superior accuracy and reproducibility compared to contemporary tools such as Cellpose and StarDist. Its competitive segmentation speed on a CPU-based platform highlights its significant potential for basic research and clinical applications -- particularly in cell transplantation for personalized medicine and muscle regeneration therapies.
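A training-free cytometry pass of this general kind could look like the classical-CV sketch below; the function and parameter choices (Otsu threshold, watershed split, region properties) are ours to illustrate the idea, not details of the published tool, and the input frame is synthetic.

```python
# Sketch of a training-free cytometry pass in the spirit of the framework
# (classical CV only; choices are ours, not the tool's):
# threshold -> clean-up -> watershed instance split -> per-cell features.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, morphology, segmentation, feature

# Synthetic "phase-contrast" frame: blurred blobs on a noisy background.
rng = np.random.default_rng(0)
img = np.zeros((256, 256))
for y, x in rng.integers(30, 226, size=(12, 2)):
    yy, xx = np.ogrid[:256, :256]
    img[(yy - y) ** 2 + (xx - x) ** 2 < 15 ** 2] = 1.0
img = filters.gaussian(img, 3) + rng.normal(0, 0.05, img.shape)

binary = img > filters.threshold_otsu(img)               # semantic segmentation
binary = morphology.remove_small_objects(binary, 64)
dist = ndi.distance_transform_edt(binary)
peaks = feature.peak_local_max(dist, labels=measure.label(binary), min_distance=10)
markers = np.zeros_like(dist, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
instances = segmentation.watershed(-dist, markers, mask=binary)  # instance split

for cell in measure.regionprops(instances):              # feature extraction
    print(cell.label, cell.area, round(cell.eccentricity, 2))
```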
Moving animal groups consist of many distinct individuals but can operate and function as one unit when performing different tasks. Effectively evading unexpected predator attacks is a primary task for many moving groups. The current explanation for predator evasion responses in moving animal groups requires that the individuals in the group interact via (velocity) alignment. However, experiments have shown that some animals do not use alignment. This suggests that another explanation for the predator evasion capacity of at least these species is needed. Here we establish that effective collective predator evasion does not require alignment; it can be induced via attraction and repulsion alone. We also show that speed differences between individuals that have directly observed the predator and those that have not influence evasion success and the speed of the collective evasion process, but are not required to induce the phenomenon. Our work adds collective predator evasion to a number of phenomena previously thought to require alignment interactions that have recently been shown to emerge from attraction and repulsion alone. Based on our findings we suggest experiments and make predictions that may lead to a deeper understanding not only of collective predator evasion, but also of collective motion in general.
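A minimal attraction-repulsion step with no alignment term, as a toy illustration of the claim, is sketched below; the force coefficients, detection radius, and escape rule are our placeholders, not the paper's model.

```python
# Minimal attraction-repulsion flocking step (no alignment term), as a toy
# illustration of the paper's point; parameters and the escape rule are ours.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, (50, 2))             # 50 prey positions
predator = np.array([5.0, 5.0])

def step(pos, predator, dt=0.05, c_att=0.5, c_rep=2.0, c_esc=4.0, r_detect=2.0):
    diff = pos[None, :, :] - pos[:, None, :]          # pairwise displacements
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(pos))
    attract = c_att * diff.mean(axis=1)               # pull toward the group
    repel = -c_rep * (diff / dist[..., None] ** 3).mean(axis=1)  # short-range push
    away = pos - predator
    d_pred = np.linalg.norm(away, axis=-1, keepdims=True)
    escape = c_esc * (away / d_pred) * (d_pred < r_detect)  # only detectors flee
    return pos + dt * (attract + repel + escape)

for _ in range(200):
    pos = step(pos, predator)
print("mean distance to predator:", np.linalg.norm(pos - predator, axis=1).mean())
```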
Complex patterns emerge across a wide range of biological systems. While such patterns often exhibit remarkable robustness, variation and irregularity exist at multiple scales and can carry important information about the underlying agent interactions driving collective dynamics. Many methods for quantifying biological patterns focus on large-scale, characteristic features (such as stripe width or spot number), but questions remain on how to characterize messy patterns. In the case of cellular patterns that emerge during development or regeneration, understanding where patterns are most susceptible to variability may help shed light on cell behavior and the tissue environment. Motivated by these challenges, we introduce methods based on topological data analysis to classify and quantify messy patterns arising from agent-based interactions, by extracting meaningful biological interpretations from persistence barcode summaries. To compute persistent homology, our methods rely on a sweeping-plane filtration which, in comparison to the Vietoris-Rips filtration, is more rarely applied to biological systems. We demonstrate how results from the sweeping-plane filtration can be interpreted to quantify stripe patterns (with and without interruptions) by analyzing in silico zebrafish skin patterns, and we generate new quantitative predictions about which pattern features may be most robust or variable. Our work provides an automated framework for quantifying features and irregularities in spot and stripe patterns and highlights how different approaches to persistent homology can provide complementary insight into biological systems.
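A self-contained toy of a sweeping-plane (height-function) filtration is sketched below, under our own assumptions of a point cloud with a fixed edge set: vertices enter at their projection onto a sweep direction, edges at the later of their endpoints, and zero-dimensional persistence pairs follow the elder rule. This is a stripped-down illustration, not the paper's implementation.

```python
# Toy sweeping-plane filtration with 0-dimensional persistence via union-find.
import numpy as np

def sweep_persistence(points, edges, direction):
    h = points @ np.asarray(direction, float)            # vertex heights
    parent = list(range(len(points)))
    def find(i):                                         # union-find with halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]; i = parent[i]
        return i
    bars = []
    # Process edges in order of appearance along the sweep; each merge kills
    # the younger component (elder rule), whose root stores its birth height.
    for u, v in sorted(edges, key=lambda e: max(h[e[0]], h[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if h[ru] > h[rv]:                                # keep the elder as root
            ru, rv = rv, ru
        bars.append((h[rv], max(h[u], h[v])))            # (birth, death)
        parent[rv] = ru
    bars.append((h.min(), np.inf))                       # essential component
    return bars

pts = np.array([[0, 0], [1, 0.1], [2, 0], [3, 1.0], [4, 0.2]])
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(sweep_persistence(pts, edges, direction=[0, 1]))   # sweep along y
```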
Three-dimensional X-ray histology techniques offer a non-invasive alternative to conventional 2D histology, enabling volumetric imaging of biological tissues without the need for physical sectioning or chemical staining. However, the inherent greyscale image contrast of X-ray tomography limits its biochemical specificity compared to traditional histological stains. Within digital pathology, deep learning-based virtual staining has demonstrated utility in simulating stained appearances from label-free optical images. In this study, we extend virtual staining to the X-ray domain by applying cross-modality image translation to generate artificially stained slices from synchrotron-radiation-based micro-CT scans. Using over 50 co-registered image pairs of micro-CT and toluidine blue-stained histology from bone-implant samples, we trained a modified CycleGAN network tailored for limited paired data. Whole slide histology images were downsampled to match the voxel size of the CT data, with on-the-fly data augmentation for patch-based training. The model incorporates pixelwise supervision and greyscale consistency terms, producing histologically realistic colour outputs while preserving high-resolution structural detail. Our method outperformed Pix2Pix and standard CycleGAN baselines across SSIM, PSNR, and LPIPS metrics. Once trained, the model can be applied to full CT volumes to generate virtually stained 3D datasets, enhancing interpretability without additional sample preparation. While features such as new bone formation were reproduced, some variability in the depiction of implant degradation layers highlights the need for further training data and refinement. This work introduces virtual staining to 3D X-ray imaging and offers a scalable route for chemically informative, label-free tissue characterisation in biomedical research.
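The composite objective described above might look roughly like the loss-level sketch below; generator and discriminator networks are omitted, the loss weights are placeholders, and the luminance projection is a standard BT.601 choice of ours rather than a detail confirmed by the abstract.

```python
# Loss-level sketch only (not the paper's exact objective): CycleGAN cycle
# terms plus pixelwise supervision on co-registered pairs and a
# greyscale-consistency term tying virtual stains back to the CT input.
import torch

def luminance(rgb):                       # ITU-R BT.601 grey projection
    w = torch.tensor([0.299, 0.587, 0.114], device=rgb.device)
    return (rgb * w[:, None, None]).sum(dim=1, keepdim=True)

def virtual_staining_loss(ct, stain, G, F, lam_cyc=10.0, lam_pix=5.0, lam_grey=1.0):
    fake_stain = G(ct)                    # CT slice -> virtual histology
    fake_ct = F(stain)                    # histology -> virtual CT
    cyc = (F(fake_stain) - ct).abs().mean() + (G(fake_ct) - stain).abs().mean()
    pix = (fake_stain - stain).abs().mean()            # needs co-registered pairs
    grey = (luminance(fake_stain) - ct).abs().mean()   # keep greyscale structure
    return lam_cyc * cyc + lam_pix * pix + lam_grey * grey

# Trivial stand-ins just to show shapes; real G/F would be CNN generators.
G = lambda x: x.repeat(1, 3, 1, 1)         # 1-ch CT -> 3-ch "stain"
F_ = lambda x: x.mean(dim=1, keepdim=True) # 3-ch stain -> 1-ch "CT"
ct = torch.rand(2, 1, 64, 64)              # greyscale micro-CT patch
stain = torch.rand(2, 3, 64, 64)           # RGB histology patch
print(float(virtual_staining_loss(ct, stain, G, F_)))
```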
Large-scale single-cell and Perturb-seq investigations routinely involve clustering cells and subsequently annotating each cluster with Gene Ontology (GO) terms to elucidate the underlying biological programs. However, both stages, resolution selection and functional annotation, are inherently subjective, relying on heuristics and expert curation. We present HYPOGENEAGENT, a large language model (LLM)-driven framework that transforms cluster annotation into a quantitatively optimizable task. Initially, an LLM functioning as a gene-set analyst analyzes the content of each gene program or perturbation module and generates a ranked list of GO-based hypotheses, accompanied by calibrated confidence scores. Subsequently, we embed every predicted description with a sentence-embedding model, compute pairwise cosine similarities, and let the agent referee panel score (i) the internal consistency of the predictions (high average similarity within the same cluster, termed intra-cluster agreement) and (ii) their external distinctiveness (low similarity between clusters, termed inter-cluster separation). These two quantities are combined to produce an agent-derived resolution score, which is maximized when clusters exhibit simultaneous coherence and mutual exclusivity. When applied to a public K562 CRISPRi Perturb-seq dataset as a preliminary test, our resolution score selects clustering granularities that align better with known pathways than classical metrics such as the silhouette score and the modularity score for gene functional enrichment summaries. These findings establish LLM agents as objective adjudicators of cluster resolution and functional annotation, thereby paving the way for fully automated, context-aware interpretation pipelines in single-cell multi-omics studies.
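The scoring step can be pictured with the hedged sketch below; the exact combination rule is our guess at a reasonable instantiation (intra-cluster agreement minus inter-cluster similarity), not the paper's published formula, and the embeddings are random placeholders.

```python
# Hedged reconstruction of the resolution-score idea (our instantiation):
# reward high within-cluster similarity and low between-cluster similarity
# among embedded cluster descriptions.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def resolution_score(embeddings, cluster_ids):
    S = cosine_similarity(embeddings)
    same = np.equal.outer(cluster_ids, cluster_ids)
    off_diag = ~np.eye(len(cluster_ids), dtype=bool)
    intra = S[same & off_diag].mean()      # intra-cluster agreement
    inter = S[~same].mean()                # inter-cluster similarity (lower is better)
    return intra - inter

rng = np.random.default_rng(0)
emb = rng.normal(size=(12, 384))           # e.g., sentence-embedding vectors
ids = np.repeat([0, 1, 2], 4)              # 3 clusters x 4 hypothesis descriptions
print(resolution_score(emb, ids))
```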
Background: Investigational New Drug (IND) application preparation is time-intensive and expertise-dependent, slowing early clinical development. Objective: To evaluate whether a large language model (LLM) platform (AutoIND) can reduce first-draft composition time while maintaining document quality in regulatory submissions. Methods: Drafting times for IND nonclinical written summaries (eCTD modules 2.6.2, 2.6.4, 2.6.6) generated by AutoIND were directly recorded. For comparison, manual drafting times for IND summaries previously cleared by the U.S. FDA were estimated from the experience of regulatory writers (≥6 years) and used as industry-standard benchmarks. Quality was assessed by a blinded regulatory writing assessor using seven pre-specified categories: correctness, completeness, conciseness, consistency, clarity, redundancy, and emphasis. Each sub-criterion was scored 0-3 and normalized to a percentage. A critical regulatory error was defined as any misrepresentation or omission likely to alter regulatory interpretation (e.g., incorrect NOAEL, omission of mandatory GLP dose-formulation analysis). Results: AutoIND reduced initial drafting time by ~97% (from ~100 h to 3.7 h for 18,870 pages/61 reports in IND-1; and to 2.6 h for 11,425 pages/58 reports in IND-2). Quality scores were 69.6% and 77.9% for IND-1 and IND-2. No critical regulatory errors were detected, but deficiencies in emphasis, conciseness, and clarity were noted. Conclusions: AutoIND can dramatically accelerate IND drafting, but expert regulatory writers remain essential to mature outputs to submission-ready quality. Systematic deficiencies identified provide a roadmap for targeted model improvements.
Timely and robust influenza incidence forecasting is critical for public health decision-making. This paper presents MAESTRO (Multi-modal Adaptive Estimation for Temporal Respiratory Disease Outbreak), a novel, unified framework that synergistically integrates advanced spectro-temporal modeling with multi-modal data fusion, including surveillance, web search trends, and meteorological data. By adaptively weighting heterogeneous data sources and decomposing complex time series patterns, the model achieves robust and accurate forecasts. Evaluated on over 11 years of Hong Kong influenza data (excluding the COVID-19 period), MAESTRO demonstrates state-of-the-art performance, achieving a superior model fit with an R² of 0.956. Extensive ablations confirm the significant contributions of its multi-modal and spectro-temporal components. The modular and reproducible pipeline is made publicly available to facilitate deployment and extension to other regions and pathogens, presenting a powerful tool for epidemiological forecasting.
Background: Parkinson's disease remains a major neurodegenerative disorder with high misdiagnosis rates, primarily due to reliance on clinical rating scales. Recent studies have demonstrated a strong association between gut microbiota and Parkinson's disease, suggesting that microbial composition may serve as a promising biomarker. Although deep learning models based on gut microbiota show potential for early prediction, most approaches rely on single classifiers and often overlook inter-strain correlations or temporal dynamics. Therefore, there is an urgent need for more robust feature extraction methods tailored to microbiome data. Methods: We proposed BDPM (A Machine Learning-Based Feature Extractor for Parkinson's Disease Classification via Gut Microbiota Analysis). First, we collected gut microbiota profiles from 39 Parkinson's patients and their healthy spouses to identify differentially abundant taxa. Second, we developed an innovative feature selection framework named RFRE (Random Forest combined with Recursive Feature Elimination), integrating ecological knowledge to enhance biological interpretability. Finally, we designed a hybrid classification model to capture temporal and spatial patterns in microbiome data.
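The core RFRE combination maps directly onto standard scikit-learn components, as in the sketch below; the abundances and labels are synthetic, the feature counts are placeholders, and the ecological-knowledge integration described in the abstract is not reproduced here.

```python
# Sketch of the RFRE idea with scikit-learn (synthetic abundances; the real
# pipeline adds ecological knowledge we do not reproduce here): recursive
# feature elimination driven by random-forest importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.lognormal(size=(78, 120))          # 78 subjects x 120 taxa abundances
y = rng.integers(0, 2, 78)                 # PD vs healthy-spouse labels (fake)

selector = RFE(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    n_features_to_select=20,               # keep the 20 most informative taxa
    step=5,                                # drop 5 taxa per elimination round
).fit(X, y)
print("selected taxa indices:", np.flatnonzero(selector.support_))
```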
Quantitative proteomics plays a central role in uncovering regulatory mechanisms, identifying disease biomarkers, and guiding the development of precision therapies. These insights are often obtained through complex Bayesian models, whose inference procedures are computationally intensive, especially when applied at scale to biological datasets. This limits the accessibility of advanced modelling techniques needed to fully exploit proteomics data. Although Sequential Monte Carlo (SMC) methods offer a parallelisable alternative to traditional Markov Chain Monte Carlo, their high-performance implementations often rely on specialised hardware, increasing both financial and energy costs. We address these challenges by introducing an opportunistic computing framework for SMC samplers, tailored to the demands of large-scale proteomics inference. Our approach leverages idle compute resources at the University of Liverpool via HTCondor, enabling scalable Bayesian inference without dedicated high-performance computing infrastructure. Central to this framework is a novel Coordinator-Manager-Follower architecture that reduces synchronisation overhead and supports robust operation in heterogeneous, unreliable environments. We evaluate the framework on a realistic proteomics model and show that opportunistic SMC delivers accurate inference with weak scaling, increasing samples generated under a fixed time budget as more resources join. To support adoption, we release CondorSMC, an open-source package for deploying SMC samplers in opportunistic computing environments.
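For readers unfamiliar with SMC samplers, the minimal tempered example below shows the reweight-resample-move cycle on a toy 1-D Gaussian target; it is a stand-in for the proteomics posterior and does not depict the Coordinator-Manager-Follower machinery or HTCondor deployment.

```python
# Minimal SMC sampler over a tempered 1-D Gaussian target (a toy stand-in):
# reweight -> resample -> random-walk Metropolis move per temperature.
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2     # N(3, 0.5^2), unnormalized
log_prior = lambda x: -0.5 * x ** 2                      # N(0, 1)

N = 2000
x = rng.normal(0, 1, N)                                  # draw from the prior
betas = np.linspace(0, 1, 21)                            # tempering schedule
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw = (b1 - b0) * (log_target(x) - log_prior(x))    # incremental weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, N, p=w)]                         # multinomial resampling
    prop = x + rng.normal(0, 0.3, N)                     # MH move at temperature b1
    cur = b1 * log_target(x) + (1 - b1) * log_prior(x)
    new = b1 * log_target(prop) + (1 - b1) * log_prior(prop)
    accept = np.log(rng.uniform(size=N)) < new - cur
    x = np.where(accept, prop, x)
print("posterior mean ~", x.mean())                      # should be near 3.0
```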
Patients with rare types of melanoma, such as acral, mucosal, or uveal melanoma, have lower survival rates than patients with cutaneous melanoma; these lower survival rates reflect the lower objective response rates to immunotherapy compared to cutaneous melanoma. Understanding tumor-immune dynamics in rare melanomas is critical for the development of new therapies and for improving response rates to current cancer therapies. Progress has been hindered by the lack of clinical data and the need for better preclinical models of rare melanomas. Canine melanoma provides a valuable comparative oncology model for rare types of human melanoma. We analyzed RNA sequencing data from canine melanoma patients and combined this with literature information to create a novel mechanistic mathematical model of melanoma-immune dynamics. Sensitivity analysis of the mathematical model indicated influential pathways in the dynamics, providing support for potential new therapeutic targets and future combinations of therapies. We share our learnings from this work to help enable the application of this proof-of-concept workflow to other rare disease settings with sparse available data.
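A generic flavor of such a mechanistic model and its sensitivity scan is sketched below with a two-compartment tumor-immune ODE and a one-at-a-time parameter bump; the equations, parameter values, and sensitivity method are our toy stand-ins, far simpler than the paper's calibrated model.

```python
# Generic two-compartment tumor-immune ODE with a crude one-at-a-time
# sensitivity scan (our toy stand-in, not the paper's mechanistic model).
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, y, r, K, kill, rec, d):
    T, E = y                                  # tumor cells, effector immune cells
    dT = r * T * (1 - T / K) - kill * T * E   # logistic growth minus immune kill
    dE = rec * T - d * E                      # tumor-driven recruitment, decay
    return [dT, dE]

base = dict(r=0.3, K=1e9, kill=1e-7, rec=1e-4, d=0.1)   # placeholder parameters

def final_tumor(params):
    sol = solve_ivp(dynamics, (0, 200), [1e6, 1e3],
                    args=tuple(params.values()), rtol=1e-6)
    return sol.y[0, -1]

ref = final_tumor(base)
for name in base:                             # +10% one-at-a-time sensitivity
    bumped = dict(base); bumped[name] *= 1.1
    print(f"{name}: {(final_tumor(bumped) - ref) / ref:+.2%}")
```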