Browse, search and filter the latest cybersecurity research papers from arXiv
This study investigates the rapid growth and evolving network structure of Bluesky from August 2023 to February 2025. Through multiple waves of user migrations, the platform has reached a stable, persistently active user base. The growth process has given rise to a dense follower network with clustering and hub features that favor viral information diffusion. These developments highlight engagement and structural similarities between Bluesky and established platforms.
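The structural claims above (clustering, hubs) correspond to standard network statistics. As a minimal, hedged illustration of how such properties might be measured on a follower graph, the sketch below uses networkx on a synthetic directed graph; the graph and the top-1% hub cutoff are illustrative assumptions, not the paper's data or method.

```python
import networkx as nx

# Stand-in directed follower graph; in the study this would be the
# Bluesky follower network (edge u -> v means "u follows v").
G = nx.gnp_random_graph(1000, 0.01, directed=True, seed=42)

# Average clustering of the undirected projection: higher values indicate
# the kind of triangle-rich structure described above.
clustering = nx.average_clustering(G.to_undirected())

# Hub structure: flag the top 1% of accounts by in-degree as "hubs"
# (an illustrative cutoff, not the paper's definition).
in_degrees = sorted((d for _, d in G.in_degree()), reverse=True)
hub_cutoff = in_degrees[len(in_degrees) // 100]
hubs = [n for n, d in G.in_degree() if d >= hub_cutoff]

print(f"avg clustering: {clustering:.3f}, hubs (top 1% in-degree): {len(hubs)}")
```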
Social media bots are AI agents that participate in online conversations. Most studies treat bots as a monolith and focus on the malicious nature of these agents. However, bots take on many different personas, each specialized toward a specific behavioral or content trait; nor are bots uniformly bad, since they are used to disseminate both beneficial and harmful information. In this article, we introduce fifteen agent personas of social media bots, organized into two main categories: Content-Based Bot Personas and Behavior-Based Bot Personas. We also develop yardsticks for the good-bad duality of bots, elaborating metrics for benign and malicious bot agents. Our work puts forth a guideline to inform bot-detection regulation, emphasizing that policies should focus on how these agents are employed, rather than collectively labeling bot agents as bad.
As international competition in technology intensifies, nations need to identify key technologies to foster innovation. However, this identification is difficult because technologies are interdependent and therefore complex in nature. This study assesses patent technological fields by applying the Technological Complexity Index (TCI) from a corporate perspective, addressing the index's underutilization in Japan despite its potential. Using carefully processed patent data from fiscal years 1981 to 2010, we analyze bipartite networks consisting of 1,938 corporations and either 35 or 124 technological fields. Our findings provide quantitative characteristics of ubiquity and sophistication for patent fields, detailed technological trends that reflect the social context, and methodological stability for policymakers and researchers, contributing to targeted innovation strategies in Japan.
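Complexity indices of this kind are typically computed with the Hidalgo-Hausmann "method of reflections" on the corporation-field bipartite matrix. Below is a minimal sketch of that iteration on a random binary matrix; the matrix, iteration count, and dimensions are placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Binary specialization matrix M: M[c, f] = 1 if corporation c is
# specialized (holds a revealed advantage) in technological field f.
M = (rng.random((200, 35)) < 0.2).astype(float)
M = M[M.sum(axis=1) > 0]      # drop corporations with no fields
M = M[:, M.sum(axis=0) > 0]   # drop fields held by no corporation

# Zeroth-order reflections: diversity (corporations) and ubiquity (fields).
diversity = M.sum(axis=1)     # k_{c,0}
ubiquity = M.sum(axis=0)      # k_{f,0}

kc, kf = diversity.copy(), ubiquity.copy()
for _ in range(20):           # method-of-reflections iterations
    kc, kf = (M @ kf) / diversity, (M.T @ kc) / ubiquity

# After standardization, kf gives a TCI-like complexity score per field.
tci = (kf - kf.mean()) / kf.std()
print("most sophisticated fields:", np.argsort(tci)[::-1][:5])
```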
Cryptocurrency users increasingly rely on obfuscation techniques such as mixers, swappers, and decentralised or no-KYC exchanges to protect their anonymity. However, at the same time, these services are exploited by criminals to conceal and launder illicit funds. Among obfuscation services, mixers remain one of the most challenging entities to tackle. This is because their owners are often unwilling to cooperate with Law Enforcement Agencies, and technically, they operate as 'black boxes'. To better understand their functionalities, this paper proposes an approach to analyse the operations of mixers by examining their address-transaction graphs and identifying topological similarities to uncover common patterns that can define the mixer's modus operandi. The approach utilises community detection algorithms to extract dense topological structures and clustering algorithms to group similar communities. The analysis is further enriched by incorporating data from external sources related to known Exchanges, in order to understand their role in mixer operations. The approach is applied to dissect the Blender.io mixer activities within the Bitcoin blockchain, revealing: i) consistent structural patterns across address-transaction graphs; ii) that Exchanges play a key role, following a well-established pattern, which raises several concerns about their AML/KYC policies. This paper represents an initial step toward dissecting and understanding the complex nature of mixer operations in cryptocurrency networks and extracting their modus operandi.
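As a hedged sketch of the pipeline's two core steps, community detection on a transaction graph followed by clustering of community-level features, the code below uses networkx's Louvain communities and scikit-learn's KMeans over simple structural descriptors. The graph, feature set, and cluster count are illustrative assumptions, not Blender.io data or the paper's exact algorithms.

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import louvain_communities
from sklearn.cluster import KMeans

# Stand-in for an address-transaction graph (bipartite in the real setting).
G = nx.powerlaw_cluster_graph(500, 3, 0.1, seed=1)

# Step 1: extract dense topological structures (communities).
communities = louvain_communities(G, seed=1)

# Step 2: describe each community with simple structural features ...
features = np.array([
    [len(c), nx.density(G.subgraph(c)), nx.average_clustering(G.subgraph(c))]
    for c in communities
])

# ... then group similar communities to surface recurring patterns.
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(features)
print(dict(zip(range(len(communities)), labels)))
```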
Drug overdose remains a critical global health issue, often driven by misuse of opioids, painkillers, and psychiatric medications. Traditional research methods face limitations, whereas social media offers real-time insights into self-reported substance use and overdose symptoms. This study proposes an AI-driven NLP framework trained on annotated social media data to detect commonly used drugs and associated overdose symptoms. Using a hybrid annotation strategy with LLMs and human annotators, we applied traditional ML models, neural networks, and advanced transformer-based models. Our framework achieved 98% accuracy in multi-class and 97% in multi-label classification, outperforming baseline models by up to 8%. These findings highlight the potential of AI for supporting public health surveillance and personalized intervention strategies.
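A minimal sketch of the multi-label setup described above, using a TF-IDF baseline with scikit-learn rather than the paper's transformer models; the example posts and symptom labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy annotated posts: each post may mention several overdose symptoms.
posts = ["took too many pills, feeling dizzy and nauseous",
         "can't stay awake after the dose",
         "fine today, just tired"]
labels = [["dizziness", "nausea"], ["drowsiness"], []]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# One binary classifier per symptom label (multi-label classification).
clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(posts, Y)
pred = clf.predict(["dizzy and can't keep my eyes open"])
print(mlb.inverse_transform(pred))
```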
Graph Neural Networks (GNNs) struggle to balance heterophily and homophily in representation learning, a challenge further amplified in self-supervised settings. We propose H$^3$GNNs, an end-to-end self-supervised learning framework that harmonizes both structural properties through two key innovations: (i) Joint Structural Node Encoding. We embed nodes into a unified space combining linear and non-linear feature projections with K-hop structural representations via a Weighted Graph Convolution Network (WGCN); a cross-attention mechanism enhances awareness of, and adaptability to, heterophily and homophily. (ii) Self-Supervised Learning Using Teacher-Student Predictive Architectures with Node-Difficulty Driven Dynamic Masking Strategies. In our teacher-student model, the student sees the masked input graph and predicts, in the joint encoding space, the node features inferred by the teacher, which sees the full input graph. To increase learning difficulty, we introduce two novel masking strategies based on node prediction difficulty. Experiments on seven benchmarks (four heterophily datasets and three homophily datasets) confirm the effectiveness and efficiency of H$^3$GNNs across diverse graph types. H$^3$GNNs achieves overall state-of-the-art performance on the four heterophily datasets, while remaining on par with previous state-of-the-art methods on the three homophily datasets.
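The teacher-student objective can be illustrated compactly: the teacher encodes the full graph, the student encodes a masked copy, the loss compares their embeddings on masked nodes, and the teacher is updated as an exponential moving average of the student. The PyTorch sketch below uses a single plain weighted graph convolution as a stand-in for the paper's full architecture; dimensions, the uniform mask rate, and the EMA coefficient are arbitrary assumptions.

```python
import copy
import torch
import torch.nn as nn

class WGCNLayer(nn.Module):
    """One weighted graph convolution: H' = relu(A_hat @ H @ W)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, A_hat, H):
        return torch.relu(self.lin(A_hat @ H))

n, d = 100, 16
A_hat = torch.softmax(torch.randn(n, n), dim=1)  # stand-in normalized adjacency
X = torch.randn(n, d)

student = WGCNLayer(d, d)
teacher = copy.deepcopy(student)        # teacher sees the full graph
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

mask = torch.rand(n) < 0.3              # uniform node masking (the paper's
X_masked = X.clone()                    # scheme is difficulty-driven instead)
X_masked[mask] = 0.0

with torch.no_grad():
    target = teacher(A_hat, X)          # teacher targets in the joint space
pred = student(A_hat, X_masked)         # student predicts from masked input
loss = ((pred[mask] - target[mask]) ** 2).mean()
loss.backward()
opt.step()

with torch.no_grad():                   # EMA update of the teacher
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(0.99).add_(0.01 * ps)
```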
Community detection, the unsupervised task of clustering the nodes of a graph, finds applications across various fields. Common approaches optimize an objective function to partition the nodes into communities at a single scale of granularity. However, single-scale approaches often fall short of producing partitions that are robust and at a suitable scale. The existing algorithm PyGenStability returns multiple robust partitions for a network by optimizing the multi-scale Markov stability function. However, when the suitable scale is not known or assumed by the user, there is no principled method to select a single robust partition at a suitable scale from the multiple partitions that PyGenStability produces. Our proposed method combines the Markov stability framework with a pre-trained machine learning model for scale selection to obtain one robust partition at a scale learned from the graph structure. This automatic scale selection uses a gradient boosting model pre-trained on hand-crafted and embedding-based network features from a labeled dataset of 10k benchmark networks; the model was trained to predict the scale value that maximizes the similarity of the output partition to the planted partition of the benchmark network. Combining our scale selection algorithm with PyGenStability yields PyGenStabilityOne (PO): a hyperparameter-free multi-scale community detection algorithm that returns one robust partition at a suitable scale without any assumptions, input, or tweaking from the user. We compare PO against 29 algorithms and show that it outperforms 25 of them by statistically meaningful margins. Our results facilitate choosing between community detection algorithms, among which PO stands out as an accurate, robust, and hyperparameter-free method.
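The scale-selection idea can be sketched independently of the PyGenStability internals: a regressor maps network features to a predicted Markov scale, and the candidate partition closest to that scale is returned. Everything below, including the feature set, training targets, and helper names, is a hypothetical stand-in for the paper's pre-trained model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Stand-in training set: network features -> best scale value (in the
# paper, the target is the scale maximizing similarity to the planted
# partition over 10k benchmark networks).
X_train = rng.random((500, 8))
y_train = X_train @ rng.random(8) + 0.1 * rng.standard_normal(500)

model = GradientBoostingRegressor().fit(X_train, y_train)

def select_partition(features, scales, partitions):
    """Pick the robust partition whose scale is closest to the prediction."""
    predicted_scale = model.predict(features.reshape(1, -1))[0]
    best = int(np.argmin(np.abs(np.asarray(scales) - predicted_scale)))
    return partitions[best]

# Hypothetical multi-scale output: one candidate partition per scale.
scales = [0.5, 1.0, 2.0, 4.0]
partitions = [f"partition@{s}" for s in scales]
print(select_partition(rng.random(8), scales, partitions))
```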
Masculine defaults are widely recognized as a significant type of gender bias, but they are often unseen as they are under-researched. Masculine defaults involve three key parts: (i) the cultural context, (ii) the masculine characteristics or behaviors, and (iii) the reward for, or simply acceptance of, those masculine characteristics or behaviors. In this work, we study discourse-based masculine defaults, and propose a twofold framework for (i) the large-scale discovery and analysis of gendered discourse words in spoken content via our Gendered Discourse Correlation Framework (GDCF); and (ii) the measurement of the gender bias associated with these gendered discourse words in LLMs via our Discourse Word-Embedding Association Test (D-WEAT). We focus our study on podcasts, a popular and growing form of social media, analyzing 15,117 podcast episodes. We analyze correlations between gender and discourse words -- discovered via LDA and BERTopic -- to automatically form gendered discourse word lists. We then study the prevalence of these gendered discourse words in domain-specific contexts, and find that gendered discourse-based masculine defaults exist in the domains of business, technology/politics, and video games. Next, we study the representation of these gendered discourse words from a state-of-the-art LLM embedding model from OpenAI, and find that the masculine discourse words have a more stable and robust representation than the feminine discourse words, which may result in better system performance on downstream tasks for men. Hence, men are rewarded for their discourse patterns with better system performance by one of the state-of-the-art language models -- and this embedding disparity is a representational harm and a masculine default.
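The D-WEAT measurement builds on the standard WEAT statistic, which compares cosine-similarity associations of two target word sets with two attribute sets. A generic WEAT-style sketch on random vectors follows; it implements Caliskan et al.'s test statistic rather than the exact D-WEAT variant, and the word lists and vectors are placeholders for the discovered discourse words and LLM embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(64) for w in
       ["synergy", "leverage", "lovely", "sweet", "he", "him", "she", "her"]}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assoc(w, A, B):
    """s(w, A, B): mean similarity to attribute set A minus to set B."""
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

# X, Y: candidate masculine/feminine discourse words (hypothetical lists);
# A, B: male/female attribute words.
X, Y = ["synergy", "leverage"], ["lovely", "sweet"]
A, B = ["he", "him"], ["she", "her"]

effect = (np.mean([assoc(x, A, B) for x in X])
          - np.mean([assoc(y, A, B) for y in Y]))
print(f"WEAT-style effect size (unnormalized): {effect:.3f}")
```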
Decentralised Autonomous Organisations (DAOs) automate governance and resource allocation through smart contracts, aiming to shift decision-making to distributed token holders. However, many DAOs face sustainability challenges linked to limited user participation, concentrated voting power, and technical design constraints. This paper addresses these issues by identifying research gaps in DAO evaluation and introducing a framework of Key Performance Indicators (KPIs) that capture governance efficiency, financial robustness, decentralisation, and community engagement. We apply the framework to a custom-built dataset of real-world DAOs constructed from on-chain data and analysed using non-parametric methods. The results reveal recurring governance patterns, including low participation rates and high proposer concentration, which may undermine long-term viability. The proposed KPIs offer a replicable, data-driven method for assessing DAO governance structures and identifying potential areas for improvement. These findings support a multidimensional approach to evaluating decentralised systems and provide practical tools for researchers and practitioners working to improve the resilience and effectiveness of DAO-based governance models.
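Two of the governance KPIs named above, participation rate and proposer concentration, admit short concrete definitions. The sketch below computes them from a toy proposal table, using the Herfindahl-Hirschman index for concentration; the paper's exact KPI formulas may differ, and the data is invented.

```python
from collections import Counter

# Toy on-chain governance data (hypothetical).
token_holders = 1_000
proposals = [
    {"proposer": "0xaaa", "voters": 42},
    {"proposer": "0xaaa", "voters": 17},
    {"proposer": "0xbbb", "voters": 65},
]

# KPI 1: participation rate = mean share of holders voting per proposal.
participation = (sum(p["voters"] for p in proposals)
                 / (len(proposals) * token_holders))

# KPI 2: proposer concentration via the Herfindahl-Hirschman index;
# 1.0 means a single address submits every proposal.
counts = Counter(p["proposer"] for p in proposals)
shares = [c / len(proposals) for c in counts.values()]
hhi = sum(s ** 2 for s in shares)

print(f"participation: {participation:.2%}, proposer HHI: {hhi:.2f}")
```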
Influence Maximization (IM) in temporal graphs focuses on identifying influential "seeds" that are pivotal for maximizing network expansion. We advocate defining these seeds through Influence Propagation Paths (IPPs), which is essential for scaling up the network. Our focus lies in efficiently labeling IPPs and accurately predicting these seeds, while addressing the often-overlooked cold-start issue prevalent in temporal networks. Our strategy introduces a motif-based labeling method and a tensorized Temporal Graph Network (TGN) tailored for multi-relational temporal graphs, bolstering prediction accuracy and computational efficiency. Moreover, we augment cold-start nodes with new neighbors drawn from historical data sharing similar IPPs. The recommendation system within an online team-based gaming environment exerts a subtle influence on the social network, forming the multi-relational (i.e., weak and strong) temporal graphs for our empirical IM study. We conduct offline experiments to assess prediction accuracy and model training efficiency, complemented by online A/B testing to validate practical network growth and effectiveness in addressing the cold-start issue.
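The cold-start remedy, attaching new nodes to historical neighbors with similar propagation behavior, can be illustrated with a nearest-neighbor lookup over IPP-style feature vectors; the feature dimensionality and neighbor count below are invented placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Historical nodes described by IPP-style features (e.g., motif counts
# along their past diffusion paths; hypothetical here).
hist_features = rng.random((300, 12))

# A cold-start node has features but no temporal edges yet.
cold_node = rng.random(12)

# Augment it with edges to the k most similar historical nodes, so the
# temporal GNN has neighbors to aggregate from.
nn = NearestNeighbors(n_neighbors=5).fit(hist_features)
_, idx = nn.kneighbors(cold_node.reshape(1, -1))
augmented_edges = [("cold_node", int(j)) for j in idx[0]]
print(augmented_edges)
```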
With a folk understanding that political polarization refers to socio-political divisions within a society, many have proclaimed that we are more divided than ever. In this account, polarization has been blamed for populism, the erosion of social cohesion, the loss of trust in the institutions of democracy, legislative dysfunction, and the collective failure to address existential risks such as Covid-19 or climate change. However, at a global scale there is surprisingly little academic literature which conclusively supports these claims, with half of all studies being U.S.-focused. Here, we provide an overview of the global state of research on polarization, highlighting insights that are robust across countries, those unique to specific contexts, and key gaps in the literature. We argue that addressing these gaps is urgent, but has been hindered thus far by systemic and cultural barriers, such as regionally stratified restrictions on data access and misaligned research incentives. If continued cross-disciplinary inertia means that these disparities are left unaddressed, we see a substantial risk that countries will adopt policies to tackle polarization based on inappropriate evidence, risking flawed decision-making and the weakening of democratic institutions.
Understanding community structures is crucial for analyzing networks, as nodes join communities that collectively shape large-scale networks. In real-world settings, the formation of communities is often impacted by several social factors, such as ethnicity, gender, wealth, or other attributes. These factors may introduce structural inequalities; for instance, real-world networks can have a few majority groups and many minority groups. Community detection algorithms, which identify communities based on network topology, may generate unfair outcomes if they fail to account for existing structural inequalities, particularly affecting underrepresented groups. In this work, we propose a set of novel group fairness metrics to assess the fairness of community detection methods. Additionally, we conduct a comparative evaluation of the most common community detection methods, analyzing the trade-off between performance and fairness. Experiments are performed on synthetic networks generated using LFR, ABCD, and HICH-BA benchmark models, as well as on real-world networks. Our results demonstrate that the fairness-performance trade-off varies widely across methods, with no single class of approaches consistently excelling in both aspects. We observe that Infomap and Significance methods are high-performing and fair with respect to different types of communities across most networks. The proposed metrics and findings provide valuable insights for designing fair and effective community detection algorithms.
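As one concrete (and purely illustrative) instance of a group fairness metric in this spirit, the sketch below scores a detected partition by each attribute group's accuracy against planted communities and reports the best-to-worst group gap; the data is synthetic, and real metrics would first align detected labels with planted ones.

```python
import numpy as np

# Planted vs detected community labels, plus a protected attribute per
# node (all toy data; the paper uses LFR/ABCD/HICH-BA and real networks).
rng = np.random.default_rng(0)
planted = rng.integers(0, 4, size=200)
detected = np.where(rng.random(200) < 0.8, planted, rng.integers(0, 4, 200))
group = rng.integers(0, 2, size=200)   # e.g., majority (0) vs minority (1)

def group_accuracy(g):
    """Share of group-g nodes placed in their planted community."""
    m = group == g
    return float(np.mean(planted[m] == detected[m]))

accs = [group_accuracy(g) for g in np.unique(group)]
# One possible group fairness score: the gap between the best- and
# worst-served groups (0 = perfectly fair across groups).
print(f"per-group accuracy: {accs}, fairness gap: {max(accs) - min(accs):.3f}")
```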
Social media plays a crucial role in shaping society, often amplifying polarization and spreading misinformation. These effects stem from complex dynamics involving user interactions, individual traits, and recommender algorithms driving content selection. Recommender systems, which significantly shape the content users see and decisions they make, offer an opportunity for intervention and regulation. However, assessing their impact is challenging due to algorithmic opacity and limited data availability. To effectively model user decision-making, it is crucial to recognize the recommender system adopted by the platform. This work introduces a method for Automatic Recommender Recognition using Graph Neural Networks (GNNs), based solely on network structure and observed behavior. To infer the hidden recommender, we first train a Recommender Neutral User model (RNU) using a GNN and an adapted hindsight academic network recommender, aiming to reduce reliance on the actual recommender in the data. We then generate several Recommender Hypothesis-specific Synthetic Datasets (RHSD) by combining the RNU with different known recommenders, producing ground truths for testing. Finally, we train Recommender Hypothesis-specific User models (RHU) under various hypotheses and compare each candidate with the original used to generate the RHSD. Our approach enables accurate detection of hidden recommenders and their influence on user behavior. Unlike audit-based methods, it captures system behavior directly, without ad hoc experiments that often fail to reflect real platforms. This study provides insights into how recommenders shape behavior, aiding efforts to reduce polarization and misinformation.
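The final comparison step admits a compact statement: each candidate recommender hypothesis is scored by how well its hypothesis-specific user model reproduces held-out interactions, and the best-scoring hypothesis is reported as the inferred recommender. The sketch below abstracts the GNN user models behind a scoring function; the hypothesis names, noise levels, and agreement metric are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
held_out_actions = rng.integers(0, 2, size=500)  # observed user choices (toy)

def user_model_predictions(hypothesis):
    """Stand-in for an RHU model trained under one recommender hypothesis."""
    noise = {"popularity": 0.1, "collaborative": 0.3, "random": 0.5}[hypothesis]
    flip = rng.random(500) < noise
    return np.where(flip, 1 - held_out_actions, held_out_actions)

# Score each hypothesis by agreement with observed behavior; the argmax
# is the inferred hidden recommender.
scores = {h: float(np.mean(user_model_predictions(h) == held_out_actions))
          for h in ["popularity", "collaborative", "random"]}
print(scores, "->", max(scores, key=scores.get))
```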
Subgraph counting, the task of determining the number of instances of a query pattern within a large graph, lies at the heart of many critical applications, from analyzing financial networks and transportation systems to understanding biological interactions. Despite decades of work yielding efficient algorithmic (AL) solutions and, more recently, machine learning (ML) approaches, a clear comparative understanding remains elusive. This gap stems from the absence of a unified evaluation framework, standardized datasets, and accessible ground truths, all of which hinder systematic analysis and fair benchmarking. To overcome these barriers, we introduce BEACON: a comprehensive benchmark designed to rigorously evaluate both AL and ML-based subgraph counting methods. BEACON provides a standardized dataset with verified ground truths, an integrated evaluation environment, and a public leaderboard, enabling reproducible and transparent comparisons across diverse approaches. Our extensive experiments reveal that while AL methods excel at efficiently counting subgraphs in very large graphs, they struggle with complex patterns (e.g., those exceeding six nodes). In contrast, ML methods can handle larger patterns but demand massive graph data inputs and often yield suboptimal accuracy on small, dense graphs. These insights not only highlight the unique strengths and limitations of each approach but also pave the way for future advancements in subgraph counting techniques. Overall, BEACON represents a significant step towards unifying and accelerating research in subgraph counting, encouraging innovative solutions and fostering a deeper understanding of the trade-offs between algorithmic and machine learning paradigms.
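For reference, an exact algorithmic baseline of the kind BEACON evaluates can be written in a few lines with networkx's VF2 matcher; note that subgraph_isomorphisms_iter enumerates node mappings, so the count is divided by the pattern's automorphism count to obtain pattern instances. This is a generic illustration, not one of the benchmarked methods.

```python
import networkx as nx
from networkx.algorithms import isomorphism

G = nx.karate_club_graph()   # host graph
pattern = nx.cycle_graph(3)  # query pattern: a triangle

# VF2 enumerates induced-subgraph node mappings; each instance is found
# once per automorphism of the pattern, so divide to count instances.
matcher = isomorphism.GraphMatcher(G, pattern)
mappings = sum(1 for _ in matcher.subgraph_isomorphisms_iter())
automorphisms = sum(
    1 for _ in isomorphism.GraphMatcher(pattern, pattern).isomorphisms_iter())
print("triangles:", mappings // automorphisms)  # 45 for the karate club
```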
In contemporary society, widespread social media usage is evident in people's daily lives. Nevertheless, disparities can arise between emotional expression in the real world and on online platforms. We comprehensively analyzed the Persian community on X to explore this phenomenon. An innovative pipeline was designed to measure the similarity between real-world emotions and those expressed on social media. Accordingly, participants' recent tweets and images were gathered and analyzed using transformer-based text and image sentiment analysis modules. Each participant's friends also provided insights into that participant's real-world emotions. A distance criterion was used to compare real-world feelings with virtual expressions. Our study encompassed N=105 participants, 393 friends who contributed their perspectives, over 8,300 collected tweets, and 2,000 media images. Results indicated a 28.67% similarity between images and real-world emotions, while tweets exhibited a 75.88% alignment with real-world feelings. Additionally, statistical tests confirmed the significance of the observed disparities in sentiment proportions.
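The distance criterion can be made concrete: represent each participant's online and real-world emotions as probability vectors over sentiment classes and report one minus a normalized distance. The sketch below uses total variation distance; the abstract does not specify the paper's exact criterion, so this choice and all the numbers are assumptions.

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two sentiment distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * float(np.abs(p / p.sum() - q / q.sum()).sum())

# Toy distributions over (negative, neutral, positive) for one participant.
tweets_sentiment = [0.2, 0.3, 0.5]   # from a transformer text classifier
friends_report = [0.25, 0.30, 0.45]  # real-world emotions, per friends
images_sentiment = [0.6, 0.2, 0.2]   # from an image sentiment model

print("tweets vs real world:",
      1 - total_variation(tweets_sentiment, friends_report))
print("images vs real world:",
      1 - total_variation(images_sentiment, friends_report))
```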
Social interactions among classroom peers, represented as social learning networks (SLNs), play a crucial role in enhancing learning outcomes. While SLN analysis has recently garnered attention, most existing approaches rely on centralized training, where data is aggregated and processed on a local/cloud server with direct access to raw data. However, in real-world educational settings, such direct access across multiple classrooms is often restricted due to privacy concerns. Furthermore, training models on isolated classroom data prevents the identification of common interaction patterns that exist across multiple classrooms, thereby limiting model performance. To address these challenges, we propose one of the first frameworks that integrates Federated Learning (FL), a distributed and collaborative machine learning (ML) paradigm, with SLNs derived from students' interactions in multiple classrooms' online forums to predict future link formations (i.e., interactions) among students. By leveraging FL, our approach enables collaborative model training across multiple classrooms while preserving data privacy, as it eliminates the need for raw data centralization. Recognizing that each classroom may exhibit unique student interaction dynamics, we further employ model personalization techniques to adapt the FL model to individual classroom characteristics. Our results demonstrate the effectiveness of our approach in capturing both shared and classroom-specific representations of student interactions in SLNs. Additionally, we utilize explainable AI (XAI) techniques to interpret model predictions, identifying key factors that influence link formation across different classrooms. These insights unveil the drivers of social learning interactions within a privacy-preserving, collaborative, and distributed ML framework -- an aspect that has not been explored before.
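The privacy-preserving training loop reduces to federated averaging: each classroom trains a local link predictor on its own forum data, and only model weights are aggregated by the server. A minimal FedAvg sketch in NumPy follows; the logistic-regression model, data shapes, and round counts are placeholders for the paper's GNN-based setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=10):
    """A few steps of logistic-regression SGD on one classroom's data."""
    w = weights.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Each classroom holds private (node-pair features, link/no-link) data.
classrooms = [(rng.random((50, 8)), rng.integers(0, 2, 50)) for _ in range(4)]

global_w = np.zeros(8)
for round_ in range(20):                        # FedAvg rounds
    locals_ = [local_update(global_w, X, y) for X, y in classrooms]
    global_w = np.mean(locals_, axis=0)         # server averages weights only

print("global weights:", np.round(global_w, 3))
```

Personalization, as described above, would then fine-tune the averaged model locally per classroom rather than deploying the global weights as-is.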
The emergence of generative AI chatbots such as ChatGPT has prompted growing public and academic interest in their role as informal mental health support tools. While early rule-based systems have been around for several years, large language models (LLMs) offer new capabilities in conversational fluency, empathy simulation, and availability. This study explores how users engage with LLMs as mental health tools by analyzing over 10,000 comments on TikTok videos that reference them in this role. Using a self-developed tiered coding schema and supervised classification models, we identify user experiences, attitudes, and recurring themes. Results show that nearly 20% of comments reflect personal use, with these users expressing overwhelmingly positive attitudes. Commonly cited benefits include accessibility, emotional support, and perceived therapeutic value. However, concerns around privacy, generic responses, and the lack of professional oversight remain prominent. Notably, the user feedback does not indicate which therapeutic framework, if any, the LLM-generated output aligns with. While the findings underscore the growing relevance of AI in everyday practices, they also highlight the urgent need for clinical and ethical scrutiny in the use of AI for mental health support.
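A lightweight version of the supervised coding step might look like the following TF-IDF baseline mapping comments to top-level codes; the schema labels and example comments are invented, and the paper's actual models and tiered schema are richer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled comments under a flat three-code schema (hypothetical).
comments = ["chatgpt got me through a panic attack",
            "it just gives generic advice tbh",
            "would never trust an ai with my feelings",
            "cheaper than therapy and always there"]
codes = ["personal_use_positive", "concern_generic",
         "concern_trust", "personal_use_positive"]

# Train a simple text classifier and code an unseen comment.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(comments, codes)
print(clf.predict(["talked to it at 3am, actually helped"]))
```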
Large language models (LLMs) demonstrate the ability to simulate human decision-making processes, enabling their use as agents in modeling sophisticated social networks, both offline and online. Recent research has explored collective behavioral patterns and structural characteristics of LLM agents within simulated networks. However, empirical comparisons between LLM-driven and human-driven online social networks remain scarce, limiting our understanding of how LLM agents differ from human users. This paper presents a large-scale analysis of Chirper.ai, an X/Twitter-like social network entirely populated by LLM agents, comprising over 65,000 agents and 7.7 million AI-generated posts. For comparison, we collect a parallel dataset from Mastodon, a human-driven decentralized social network, with over 117,000 users and 16 million posts. We examine key differences between LLM agents and humans in posting behaviors, abusive content, and social network structures. Our findings provide critical insights into the evolving landscape of online social network analysis in the AI era, offering a comprehensive profile of LLM agents in social simulations.