Large language models (LLMs) are increasingly used to assign document relevance labels in information retrieval pipelines, especially in domains lacking human-labeled data. However, different models often disagree on borderline cases, raising concerns about how such disagreement affects downstream retrieval. This study examines labeling disagreement between two open-weight LLMs, LLaMA and Qwen, on a corpus of scholarly abstracts related to Sustainable Development Goals (SDGs) 1, 3, and 7. We isolate disagreement subsets and examine their lexical properties, rank-order behavior, and classification predictability. Our results show that model disagreement is systematic, not random: disagreement cases exhibit consistent lexical patterns, produce divergent top-ranked outputs under shared scoring functions, and are distinguishable with AUCs above 0.74 using simple classifiers. These findings suggest that LLM-based filtering introduces structured variability in document retrieval, even under controlled prompting and shared ranking logic. We propose using classification disagreement as an object of analysis in retrieval evaluation, particularly in policy-relevant or thematic search tasks.
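A minimal sketch of the kind of disagreement-prediction experiment this abstract describes: train a simple classifier on lexical features to separate documents where the two LLMs agree from those where they disagree, and report AUC. The paper does not specify its features or model; TF-IDF and logistic regression here are assumptions, and `abstracts` and `labels` are hypothetical inputs.

```python
# Hedged sketch: predict LLaMA/Qwen label disagreement from lexical features.
# `abstracts` is a list of document texts; `labels` is 1 where the two models
# disagreed and 0 where they agreed (both hypothetical inputs).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def disagreement_auc(abstracts, labels):
    X_train, X_test, y_train, y_test = train_test_split(
        abstracts, labels, test_size=0.2, random_state=0, stratify=labels)
    vec = TfidfVectorizer(min_df=2, ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_train), y_train)
    scores = clf.predict_proba(vec.transform(X_test))[:, 1]
    return roc_auc_score(y_test, scores)  # the paper reports AUCs above 0.74
```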
Neuroscience and AI have an intertwined history, largely relayed in the literature of both fields. In recent years, due to the engineering orientation of AI research and industry's monopoly on its large-scale applications, the mutual expansion of neuroscience and AI in fundamental research seems challenged. In this paper, we bring empirical evidence that, on the contrary, AI and neuroscience are continuing to grow together, with a pronounced interest since the 1990s in fields of study related to neurodegenerative diseases. Using a temporal knowledge cartography of neuroscience drawn with advanced document embedding techniques, we trace the dynamic shaping of the discipline since the 1970s and identify the conceptual articulation of AI with this particular subfield. However, further analysis of the underlying citation network of the studied corpus shows that the AI technologies produced remain confined to their respective subfields and are not transferred from one subfield to another. This invites us to discuss the genericity of AI in the context of intradisciplinary development, especially regarding the diffusion of its associated metrology.
This paper aims to comprehensively grasp the research status and development trends of soil microplastics (MPs). It collects studies from the Web of Science Core Collection covering the period from 2013 to 2024. Employing CiteSpace and VOSviewer, the paper conducts in-depth analyses of literature regarding the environmental impacts of microplastics. These analyses involve keyword co-occurrence, clustering, burst term identification, as well as co-occurrence analysis of authors and institutions. Microplastics can accumulate in soil, transfer through food chains, and ultimately affect human health, making the research on them essential for effective pollution control. Focusing on the international research on the impacts of microplastics on soil and ecosystems, the study reveals a steadily increasing trend in the number of publications each year, reaching a peak of 956 articles in 2024. A small number of highly productive authors contribute significantly to the overall research output. The keyword clustering analysis results in ten major clusters, including topics such as plastic pollution and microbial communities. The research on soil microplastics has evolved through three distinct stages: the preliminary exploration phase from 2013 to 2016, the expansion phase from 2017 to 2020, and the integration phase from 2021 to 2024. For future research, multi-level assessments of the impacts of microplastics on soil ecosystems and organisms should be emphasized, in order to fully uncover the associated hazards and develop practical solutions.
The ability to find data is central to the FAIR principles underlying research data stewardship. As with the ability to reuse data, efforts to ensure and enhance findability have historically focused on discoverability of data by other researchers, but there is a growing recognition of the importance of stewarding data in a fashion that makes them FAIR for a wide range of potential reusers and stakeholders. Research institutions are one such stakeholder and have a range of motivations for discovering data affiliated with them, from facilitating compliance with funder provisions to gathering data to inform research data services. However, many research datasets and repositories are not optimized for institutional discovery (e.g., not recording or standardizing affiliation metadata), which creates downstream obstacles both to workflows designed for theoretically comprehensive discovery and to metadata-conscious data generators. Here I describe an open-source workflow for institutional tracking of research datasets at the University of Texas at Austin. This workflow takes a multi-faceted approach that utilizes multiple open application programming interfaces (APIs) to address some of the common challenges to institutional discovery, such as variation in whether affiliation metadata are recorded or made public, and if so, how metadata are standardized, structured, and recorded. It is presently able to retrieve more than 4,000 affiliated datasets across nearly 70 distinct platforms, including objects without DOIs and objects without affiliation metadata. However, major gaps remain that stem from suboptimal practices of both researchers and data repositories, many of which were identified in previous studies and persist despite significant investment in efforts to standardize and elevate the quality of datasets and their metadata.
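As an illustration of one facet of such a workflow, the sketch below queries the DataCite REST API for DOIs whose creator affiliation mentions an institution. The query field path and matching behavior are assumptions based on DataCite's public metadata schema; the actual workflow spans multiple APIs and further matching heuristics.

```python
# Hedged sketch: one API facet of institutional dataset discovery via DataCite.
import requests

def datacite_affiliated_dois(affiliation, page_size=100):
    url = "https://api.datacite.org/dois"
    params = {
        # Assumed field path for affiliation in DataCite's indexed metadata.
        "query": f'creators.affiliation.name:"{affiliation}"',
        "page[size]": page_size,
    }
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    # Each record carries a DOI ("id") plus attributes such as titles.
    return [d["id"] for d in resp.json().get("data", [])]

# e.g. datacite_affiliated_dois("University of Texas at Austin")
```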
We present Digital Collections Explorer, a web-based, open-source exploratory search platform that leverages CLIP (Contrastive Language-Image Pre-training) for enhanced visual discovery of digital collections. Our Digital Collections Explorer can be installed locally and configured to run on a visual collection of interest on disk in just a few steps. Building upon recent advances in multimodal search techniques, our interface enables natural language queries and reverse image searches over digital collections with visual features. This paper describes the system's architecture, implementation, and application to various cultural heritage collections, demonstrating its potential for democratizing access to digital archives, especially those with impoverished metadata. We present case studies with maps, photographs, and PDFs extracted from web archives in order to demonstrate the flexibility of the Digital Collections Explorer, as well as its ease of use. We demonstrate that the Digital Collections Explorer scales to hundreds of thousands of images on a MacBook Pro with an M4 chip. Lastly, we host a public demo of Digital Collections Explorer.
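A minimal sketch of CLIP-based natural-language search over a local image collection, in the spirit of the system described above; the project's own stack may differ, and the model name and library here are one common choice rather than the authors' implementation.

```python
# Hedged sketch: embed a local image collection with a CLIP model and answer
# natural-language queries by cosine similarity in the shared embedding space.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # CLIP text/image encoder

def index_images(paths):
    # Embed every image once; at scale these vectors would be cached on disk.
    return model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

def search(query, paths, image_embeddings, top_k=5):
    q = model.encode(query, convert_to_tensor=True)  # text query, same space
    hits = util.semantic_search(q, image_embeddings, top_k=top_k)[0]
    return [(paths[h["corpus_id"]], h["score"]) for h in hits]
```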
The aim of this paper is to review the use of GenAI in scientometrics and to begin a debate on the broader implications for the field. First, we provide an introduction to GenAI's generative and probabilistic nature, as rooted in distributional linguistics, and relate this to the debate on the extent to which GenAI might be able to mimic human 'reasoning'. Second, we leverage this distinction for a critical engagement with recent experiments using GenAI in scientometrics, including topic labelling, the analysis of citation contexts, predictive applications, scholars' profiling, and research assessment. GenAI shows promise in tasks where language generation dominates, such as labelling, but faces limitations in tasks that require stable semantics, pragmatic reasoning, or structured domain knowledge. However, these results might quickly become outdated. Our recommendation is therefore to systematically compare the performance of different GenAI models for each specific task. Third, we inquire whether, by generating large amounts of scientific language, GenAI might have a fundamental impact on our field by affecting textual characteristics used to measure science, such as authors, words, and references. We argue that careful empirical work and theoretical reflection will be essential to remain capable of interpreting the evolving patterns of knowledge production.
Peer review by experts is central to the evaluation of grant proposals, but little is known about how gender and disciplinary differences shape the content and tone of grant peer review reports. We analyzed 39,280 review reports submitted to the Swiss National Science Foundation between 2016 and 2023, covering 11,385 proposals for project funding across 21 disciplines from the Social Sciences and Humanities (SSH), Life Sciences (LS), and Mathematics, Informatics, Natural Sciences, and Technology (MINT). Using supervised machine learning, we classified over 1.3 million sentences by evaluation criteria and sentiment. Reviews in SSH were significantly longer and more critical, with less focus on the applicant's track record, while those in MINT were more concise and positive, with a higher focus on the track record, as compared to those in LS. Compared to male reviewers, female reviewers write longer reviews that more closely align with the evaluation criteria and express more positive sentiments. Female applicants tend to receive reviews with slightly more positive sentiment than male applicants. Gender and disciplinary culture influence how grant proposals are reviewed - shaping the tone, length, and focus of peer review reports. These differences have important implications for fairness and consistency in research funding.
Benchmark data sets are a cornerstone of machine learning development and applications, ensuring new methods are robust, reliable, and competitive. The relative rarity of benchmark sets in computational science, due to the uniqueness of the problems and the pace of change in the associated domains, makes evaluating new innovations difficult for computational scientists. In this paper, a new tool is developed and tested that can turn any of the growing number of openly available scientific data sets into a benchmark accessible to the community. BenchMake uses non-negative matrix factorisation to deterministically identify and isolate challenging edge cases on the convex hull (the smallest convex set that contains all existing data instances) and partitions a required fraction of matched data instances into a testing set that maximises divergence and statistical significance, across tabular, graph, image, signal, and textual modalities. BenchMake splits are compared to established splits and random splits using ten publicly available benchmark sets from different areas of science, with different sizes, shapes, and distributions.
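The following is a schematic sketch of the core idea as stated in the abstract: factorise the data with NMF, treat instances loading most strongly on a single archetype as edge cases near the convex hull, and deterministically assign the most extreme instances to the test split. This is a simplification under those assumptions; BenchMake's actual scoring and cross-modality matching is more involved.

```python
# Hedged sketch of an NMF-based deterministic benchmark split.
import numpy as np
from sklearn.decomposition import NMF

def benchmark_split(X, test_fraction=0.2, n_archetypes=8):
    # X: non-negative feature matrix of shape (n_samples, n_features).
    W = NMF(n_components=n_archetypes, init="nndsvd",
            random_state=0, max_iter=500).fit_transform(X)
    # Heuristic: instances dominated by one archetype sit toward the convex
    # hull of the data; rank instances by their maximum archetype loading.
    extremeness = W.max(axis=1)
    n_test = int(len(X) * test_fraction)
    test_idx = np.argsort(extremeness)[-n_test:]  # deterministic, no shuffling
    train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
    return train_idx, test_idx
```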
Scientific behavior is often characterized by a tension between building upon established knowledge and introducing novel ideas. Here, we investigate whether this tension is reflected in the relationship between the similarity of a scientific paper to previous research and its eventual citation rate. To operationalize similarity to previous research, we introduce two complementary metrics that characterize the local geometry of a publication's semantic neighborhood: (1) \emph{density} ($\rho$), defined as the ratio between a fixed number of previously published papers and the minimum distance enclosing those papers in a semantic embedding space, and (2) \emph{asymmetry} ($\alpha$), defined as the average directional difference between a paper and its nearest neighbors. We tested the predictive relationship between these two metrics and a paper's subsequent citation rate using a Bayesian hierarchical regression approach, surveying $\sim 53,000$ publications across nine academic disciplines and five different document embeddings. While the individual effects of $\rho$ on citation count are small and variable, incorporating density-based predictors consistently improves out-of-sample prediction when added to baseline models. These results suggest that the density of a paper's surrounding scientific literature may carry modest but informative signals about its eventual impact. Meanwhile, we find no evidence that asymmetry improves model predictions of citation rates. Our work provides a scalable framework for linking document embeddings to scientometric outcomes and highlights new questions regarding the role that semantic similarity plays in shaping the dynamics of scientific reward.
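The verbal definitions above admit one plausible formalization, written out below for concreteness. The constructions involving $k$ nearest previously published neighbors $x_1, \dots, x_k$, the enclosing radius $r_k$, and unit difference vectors are assumptions; the paper's exact normalizations may differ.

```latex
% One plausible reading of the two neighborhood metrics for a paper x with
% k previously published nearest neighbors x_1, ..., x_k in embedding space.
\[
  \rho(x) = \frac{k}{r_k(x)}, \qquad
  r_k(x) = \max_{1 \le i \le k} \lVert x - x_i \rVert,
\]
\[
  \alpha(x) = \Bigl\lVert \frac{1}{k} \sum_{i=1}^{k}
              \frac{x - x_i}{\lVert x - x_i \rVert} \Bigr\rVert .
\]
```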
Mathematical researchers, especially those in early-career positions, face critical decisions about topic specialization with limited information about the collaborative environments of different research areas. The aim of this paper is to study how the popularity of a research topic is associated with the structure of that topic's collaboration network, as observed by a suite of measures capturing organizational structure at several scales. We apply these measures to 1,938 algorithmically discovered topics across 121,391 papers sourced from arXiv metadata during the period 2020--2025. Our analysis, which controls for the confounding effects of network size, reveals a structural dichotomy--we find that popular topics organize into modular "schools of thought," while niche topics maintain hierarchical core-periphery structures centered around established experts. This divide is not an artifact of scale, but represents a size-independent structural pattern correlated with popularity. We also document a "constraint reversal": after controlling for size, researchers in popular fields face greater structural constraints on collaboration opportunities, contrary to conventional expectations. Our findings suggest that topic selection is an implicit choice between two fundamentally different collaborative environments, each with distinct implications for a researcher's career. To make these structural patterns transparent to the research community, we developed the Math Research Compass (https://mathresearchcompass.com), an interactive platform providing data on topic popularity and collaboration patterns across mathematical topics.
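Below is a sketch of the kind of per-topic structural summary such an analysis relies on, using two standard network statistics: modularity of a detected partition (the "schools of thought" signal) and a simple k-core concentration proxy for core-periphery structure. The paper's actual measure suite and size controls are broader; this pairing is illustrative.

```python
# Hedged sketch: summarize a topic's collaboration network with modularity
# (community structure) and a core-periphery proxy (deep-core concentration).
import networkx as nx
from networkx.algorithms import community

def structural_summary(G):
    parts = community.greedy_modularity_communities(G)
    modularity = community.modularity(G, parts)
    # Proxy for core-periphery: share of nodes in the deepest k-core.
    core_nums = nx.core_number(G)
    k_max = max(core_nums.values())
    core_share = sum(1 for v in core_nums.values() if v == k_max) / len(G)
    return {"modularity": modularity, "deep_core_share": core_share}
```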
Persistence is often regarded as a virtue in science. In this paper, however, we challenge this conventional view by highlighting its contextual nature, particularly how persistence can become a liability during periods of paradigm shift. We focus on the deep learning revolution catalyzed by AlexNet in 2012. Analyzing the 20-year career trajectories of over 5,000 scientists who were active in top machine learning venues during the preceding decade, we examine how their research focus and output evolved. We first uncover a dynamic period in which leading venues increasingly prioritized cutting-edge deep learning developments that displaced relatively traditional statistical learning methods. Scientists responded to these changes in markedly different ways. Those who were previously successful or affiliated with old teams adapted more slowly, experiencing what we term a rigidity penalty: a reluctance to embrace new directions that led to a decline in scientific impact, as measured by citation percentile rank. In contrast, scientists who pursued strategic adaptation - selectively pivoting toward emerging trends while preserving weak connections to prior expertise - reaped the greatest benefits. Taken together, our macro- and micro-level findings show that scientific breakthroughs act as mechanisms that reconfigure power structures within a field.
Every day, a vast stream of research documents is submitted to conferences, anthologies, journals, newsletters, annual reports, daily papers, and various periodicals. Many such publications use independent external specialists to review submissions. This process is called peer review, and the reviewers are called referees. However, it is not always possible to pick the best referee for a submission. Moreover, new research fields are emerging in every sector, and the number of research papers is increasing dramatically. To review all these papers, every journal assigns a small team of referees who may not be experts in all areas. For example, a research paper in communication technology should be reviewed by an expert from the same field. Thus, efficiently selecting the best reviewer or referee for a research paper is a significant challenge. In this research, we propose and implement a program that uses a new strategy to automatically select the best reviewers for a research paper. Every research paper contains references at the end, usually from the same area. First, we collect the references and count the authors who have at least one paper among them. Then, we automatically browse the web to extract research topic keywords. Next, we search for top researchers in the specific topic and retrieve the h-index, i10-index, and citation counts of the first n authors. Afterward, we rank these top n authors by a score and automatically browse their homepages to retrieve email addresses. We also identify their co-authors and colleagues online and discard them from the list to avoid conflicts of interest. The remaining top n authors, generally professors, are likely the best referees for reviewing the research paper.
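A hedged sketch of the ranking step described above: score candidate referees gathered from the reference-list authors by their public metrics, then drop anyone with a conflict of interest. The field names and weights are illustrative; the abstract does not specify the exact scoring formula.

```python
# Hedged sketch: rank candidate referees by a weighted metric score,
# excluding co-authors and colleagues of the submission's authors.
def rank_referees(candidates, conflicted_names, top_n=5):
    # candidates: list of dicts like
    #   {"name": ..., "h_index": ..., "i10_index": ..., "citations": ..., "email": ...}
    def score(c):
        # Illustrative weighted combination of the three gathered metrics.
        return 1.0 * c["h_index"] + 0.5 * c["i10_index"] + 0.001 * c["citations"]
    eligible = [c for c in candidates if c["name"] not in conflicted_names]
    return sorted(eligible, key=score, reverse=True)[:top_n]
```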
A 20-year analysis of CrossRef metadata demonstrates that global scholarly output -- encompassing publications, retractions, and preprints -- exhibits strikingly inertial growth, well-described by exponential, quadratic, and logistic models with nearly indistinguishable goodness-of-fit. Retraction dynamics, in particular, remain stable and minimally affected by the COVID-19 shock, which contributed less than 1% to total notices. Since 2004, publications doubled every 9.8 years, retractions every 11.4 years, and preprints at the fastest rate, every 5.6 years. The findings underscore a system primed for ongoing stress at unchanged structural bottlenecks. Although model forecasts diverge beyond 2024, the evidence suggests that the future trajectory of scholarly communication will be determined by persistent systemic inertia rather than episodic disruptions -- unless intentionally redirected by policy or AI-driven reform.
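For concreteness, the doubling times quoted above follow from an exponential fit: with log(annual count) regressed linearly on year, the slope k gives a doubling time of ln(2)/k. The sketch below shows this calculation only; the study also fits quadratic and logistic models, and `annual_publication_counts` is a hypothetical input.

```python
# Hedged sketch: doubling time from a least-squares exponential fit.
import numpy as np

def doubling_time(years, counts):
    k = np.polyfit(years, np.log(counts), 1)[0]  # exponential growth rate
    return np.log(2) / k

# e.g. doubling_time(range(2004, 2025), annual_publication_counts) -> ~9.8
```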
In this project, we semantically enriched and enhanced the metadata of long text documents (theses and dissertations) published in English from 1920 to 2020 and retrieved from the HathiTrust Digital Library, through a combination of manual effort and large language models. This dataset provides a valuable resource for advancing research in areas such as computational social science, digital humanities, and information science. Our paper shows that enriching metadata using LLMs is particularly beneficial for digital repositories, introducing additional metadata access points that may not originally have been foreseen to accommodate various content types. This approach is particularly effective for repositories with significant missing data in their existing metadata fields, enhancing search results and improving the accessibility of the digital repository.
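As an illustration of what LLM-based enrichment of a single catalog record might look like, the sketch below assumes an OpenAI-compatible chat API; the model name, prompt, and target fields are all assumptions, not the project's actual pipeline.

```python
# Hedged sketch: ask an LLM to propose additional metadata access points
# for a thesis record. Model, prompt, and output fields are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def enrich_record(title, opening_pages_text):
    prompt = (
        "From the thesis title and opening pages below, return a JSON object "
        "with fields: discipline, institution, degree_level, and up to five "
        "subject keywords.\n\n"
        f"Title: {title}\n\nText: {opening_pages_text[:4000]}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content  # parse and validate before ingest
```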
Preprints have become increasingly essential in the landscape of open science, facilitating not only the exchange of knowledge within the scientific community but also bridging the gap between science and technology. However, the impact of preprints on technological innovation, given their unreviewed nature, remains unclear. This study fills this gap by conducting a comprehensive scientometric analysis of patent citations to bioRxiv preprints submitted between 2013 and 2021, measuring and assessing the contribution of preprints to accelerating knowledge transfer from science to technology. Our findings reveal a growing trend of patent citations to bioRxiv preprints, with a notable surge in 2020, primarily driven by the COVID-19 pandemic. Preprints play a critical role in accelerating innovation: they not only expedite the dissemination of scientific knowledge into technological innovation but also enhance the visibility of early research results in the patenting process, while journals remain essential for academic rigor and reliability. The substantial number of post-online-publication patent citations highlights the critical role of the open science model, particularly the "open access" effect of preprints, in amplifying the impact of science on technological innovation. This study provides empirical evidence that open science policies encouraging the early sharing of research outputs, such as preprints, contribute to a more efficient linkage between science and technology, suggesting an acceleration in the pace of innovation, higher innovation quality, and economic benefits.
The explosive growth of AI research has driven paper submissions at flagship AI conferences to unprecedented levels, leading many venues in 2025 (e.g., CVPR, ICCV, KDD, AAAI, IJCAI, WSDM) to enforce strict per-author submission limits and to desk-reject any excess papers by simple ID order. While this policy helps reduce reviewer workload, it may unintentionally discard valuable papers and penalize authors' efforts. In this paper, we ask whether it is possible to respect submission limits while minimizing needless rejections. We first formalize current desk-rejection policies as an optimization problem, and then develop a practical algorithm based on linear programming relaxation and a rounding scheme. In extensive evaluation on 11 years of real-world ICLR (International Conference on Learning Representations) data, our method preserves up to $19.23\%$ more papers without violating any author limits. Moreover, our algorithm is highly efficient in practice, with all results on ICLR data computed within at most 53.64 seconds. Our work provides a simple and practical desk-rejection strategy that significantly reduces unnecessary rejections, demonstrating strong potential to improve current CS conference submission policies.
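A schematic version of the formulation follows: keep as many papers as possible subject to per-author limits, solve the LP relaxation, then round. This mirrors the relax-and-round strategy the abstract names, not the paper's exact algorithm or tie-breaking rules; `papers` and `limit` are hypothetical inputs.

```python
# Hedged sketch: LP relaxation plus greedy rounding for desk rejection.
import numpy as np
from scipy.optimize import linprog

def select_papers(papers, limit):
    # papers: list of author-id lists, one entry per submission.
    authors = sorted({a for p in papers for a in p})
    row = {a: i for i, a in enumerate(authors)}
    A = np.zeros((len(authors), len(papers)))
    for j, p in enumerate(papers):
        for a in p:
            A[row[a], j] = 1.0
    # linprog minimizes, so negate to maximize the number of accepted papers;
    # each x_j in [0, 1] is the (relaxed) decision to keep paper j.
    res = linprog(c=-np.ones(len(papers)), A_ub=A,
                  b_ub=np.full(len(authors), limit), bounds=(0, 1))
    accepted, load = [], {a: 0 for a in authors}
    for j in np.argsort(-res.x):          # round greedily by LP value
        if all(load[a] < limit for a in papers[j]):
            accepted.append(j)
            for a in papers[j]:
                load[a] += 1
    return accepted  # indices of papers kept under the per-author limits
```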
This paper reconceptualises peer review as structured public commentary. Traditional academic validation is hindered by anonymity, latency, and gatekeeping. We propose a transparent, identity-linked, and reproducible system of scholarly evaluation anchored in open commentary. Leveraging blockchain for immutable audit trails and AI for iterative synthesis, we design a framework that incentivises intellectual contribution, captures epistemic evolution, and enables traceable reputational dynamics. This model empowers fields from computational science to the humanities, reframing academic knowledge as a living process rather than a static credential.
Mathematical oncology is an interdisciplinary research field where mathematics meets cancer research. The field's intention to study cancer makes it dynamic, as practicing researchers are incentivised to quickly adapt to both technical and medical research advances. Determining the scope of mathematical oncology is therefore not straightforward; however, it is important for purposes related to funding allocation, education, scientific communication, and community organisation. To address this issue, we here conduct a bibliometric analysis of mathematical oncology. We compare our results to the broader field of mathematical biology, and position our findings within theoretical science-of-science frameworks. Based on article metadata and citation flows, our results provide evidence that mathematical oncology has undergone a significant evolution since the 1960s, marked by increased interactions with other disciplines, geographical expansion, larger research teams, and greater diversity in studied topics. The latter finding contributes to the broader discussion of which models different research communities consider valuable in the era of big data and machine learning. Further, the results presented in this study provide quantitative motivation for supporting international collaboration networks, enabling new countries to enter and remain in the field, and show that mathematical oncology benefits both mathematics and the life sciences.