Browse, search and filter the latest cybersecurity research papers from arXiv
The expansion of large language models is increasingly limited by the constrained memory capacity of modern GPUs. To mitigate this, Mixture-of-Experts (MoE) architectures activate only a small portion of parameters during inference, significantly lowering both memory demand and computational overhead. However, conventional MoE inference approaches, which select active experts independently at each layer, often introduce considerable latency because of frequent parameter transfers between host and GPU memory. In addition, current cross-layer prediction strategies, which are typically based on fixed steps, lack adaptability across different hardware platforms and workloads, thereby reducing their robustness and effectiveness. To address these challenges, we present ExpertFlow, a runtime system for MoE inference that combines adaptive expert prefetching and cache-aware routing. ExpertFlow continuously adjusts its prediction horizon for expert activation by leveraging runtime statistics such as transfer bandwidth, parameter dimensionality, and model feedback signals. Furthermore, it incorporates a hybrid cross-layer prediction scheme that fuses pregating information with intermediate computational states to anticipate future expert needs. By adaptively refining prefetching decisions and aligning them with actual usage behavior, ExpertFlow effectively decreases cache misses and removes latency caused by expert swap-ins. Our evaluation demonstrates that ExpertFlow reduces model stall time to less than 0.1% of the baseline, highlighting its capability to optimize MoE inference under stringent memory constraints.
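As a rough illustration of the horizon adaptation described above (a minimal sketch under our own assumptions, not ExpertFlow's actual code), a prefetch depth can be derived from the measured host-to-GPU bandwidth, the expert size, and the per-layer compute time:

    import math

    def prefetch_horizon(expert_bytes, bandwidth_bytes_per_s, layer_compute_s,
                         experts_per_layer=2, max_horizon=8):
        """How many layers ahead to issue expert prefetches so that the
        host-to-GPU copy finishes before the layer that needs it starts.
        All parameter names and defaults here are hypothetical."""
        transfer_s = experts_per_layer * expert_bytes / bandwidth_bytes_per_s
        horizon = math.ceil(transfer_s / layer_compute_s)  # layers of compute spanned by one transfer
        return min(max(horizon, 1), max_horizon)

    # Example: two 350 MB experts per layer over a 25 GB/s link, 5 ms per layer.
    print(prefetch_horizon(350e6, 25e9, 5e-3))  # -> 6

Under such a rule the horizon grows automatically on slower links or with larger experts, which is the behavior the abstract attributes to ExpertFlow's use of runtime statistics.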
Over-the-air (OTA) federated learning (FL) has been well recognized as a scalable paradigm that exploits the waveform superposition of the wireless multiple-access channel to aggregate model updates in a single channel use. Existing OTA-FL designs largely enforce unbiased model updates, either by assuming \emph{homogeneous} wireless conditions (equal path loss across devices) or by forcing zero-bias aggregation to guarantee convergence. Under \emph{heterogeneous} wireless scenarios, however, such designs are constrained by the weakest device and inflate the update variance. Moreover, prior analyses of biased OTA-FL largely address convex objectives, while most modern AI models are highly non-convex. Motivated by these gaps, we study OTA-FL with stochastic gradient descent (SGD) for general smooth non-convex objectives under wireless heterogeneity. We develop novel OTA-FL SGD updates that allow a structured, time-invariant model bias while facilitating reduced-variance updates. We derive a finite-time stationarity bound (on the expected time-average squared gradient norm) that explicitly reveals a bias-variance trade-off. To optimize this trade-off, we pose a non-convex joint OTA power-control problem and develop an efficient successive convex approximation (SCA) algorithm that requires only statistical CSI at the base station. Experiments on a non-convex image classification task validate the approach: the SCA-based design accelerates convergence via an optimized bias and improves generalization over prior OTA-FL baselines.
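For orientation only (a schematic of ours, not an equation from the paper), a biased OTA aggregation step and the flavor of the resulting stationarity bound can be written as

\[
\mathbf{x}_{t+1} \;=\; \mathbf{x}_t - \eta\Big(\sum_{k=1}^{K}\alpha_k\, g_k(\mathbf{x}_t) + \mathbf{z}_t\Big),
\qquad
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big\|\nabla f(\mathbf{x}_t)\big\|^2
\;\lesssim\;
\underbrace{\frac{f(\mathbf{x}_0)-f^{\star}}{\eta T}}_{\text{optimization}}
\;+\;
\underbrace{B^2(\boldsymbol{\alpha})}_{\text{time-invariant bias}}
\;+\;
\underbrace{\eta\,\sigma^2(\boldsymbol{\alpha})}_{\text{noise variance}},
\]

where $\alpha_k$ are the effective aggregation weights induced by the power control, $\mathbf{z}_t$ is the scaled receiver noise, $B(\boldsymbol{\alpha})$ measures the deviation from the intended weights, and $\sigma^2(\boldsymbol{\alpha})$ is the effective noise variance; shrinking the bias typically inflates the variance, which is the trade-off the SCA power-control design optimizes.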
Communication remains a central bottleneck in large-scale distributed machine learning, and gradient sparsification has emerged as a promising strategy to alleviate this challenge. However, existing gradient compressors face notable limitations: Rand-$K$ discards structural information and performs poorly in practice, while Top-$K$ preserves informative entries but loses the contraction property and requires costly All-Gather operations. In this paper, we propose ARC-Top-$K$, an All-Reduce-Compatible Top-$K$ compressor that aligns sparsity patterns across nodes using a lightweight sketch of the gradient, enabling index-free All-Reduce while preserving globally significant information. ARC-Top-$K$ is provably contractive and, when combined with momentum error feedback (EF21M), achieves linear speedup and sharper convergence rates than the original EF21M under standard assumptions. Empirically, ARC-Top-$K$ matches the accuracy of Top-$K$ while reducing wall-clock training time by up to 60.7\%, offering an efficient and scalable solution that combines the robustness of Rand-$K$ with the strong performance of Top-$K$.
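To make the alignment idea concrete, here is a toy simulation we wrote for illustration (the block-energy sketch and the simulated All-Reduce are our assumptions, not the authors' implementation): every node derives the same index set from an aggregated sketch, so only dense length-$K$ buffers need to be reduced and no index exchange is required.

    import numpy as np

    def arc_top_k_round(local_grads, k, block=64):
        """local_grads: list of per-node gradient vectors of equal length d.
        Returns the aggregated sparse update that every node ends up with."""
        d = local_grads[0].size
        n_blocks = (d + block - 1) // block
        # 1) Lightweight sketch: per-block energy, summed across nodes
        #    (the sum stands in for a cheap All-Reduce of the sketch).
        sketch = np.zeros(n_blocks)
        for g in local_grads:
            padded = np.zeros(n_blocks * block)
            padded[:d] = g
            sketch += (padded.reshape(n_blocks, block) ** 2).sum(axis=1)
        # 2) Every node derives the SAME index set from the shared sketch:
        #    coordinates of the highest-energy blocks until k entries are collected.
        top_blocks = np.argsort(sketch)[::-1]
        idx = np.concatenate([np.arange(b * block, min((b + 1) * block, d))
                              for b in top_blocks])[:k]
        # 3) Dense All-Reduce over the agreed k entries (simulated by averaging).
        values = sum(g[idx] for g in local_grads) / len(local_grads)
        update = np.zeros(d)
        update[idx] = values
        return update

    rng = np.random.default_rng(0)
    grads = [rng.standard_normal(1024) for _ in range(4)]
    print(np.count_nonzero(arc_top_k_round(grads, k=32)))  # 32 entries survive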
We propose a fully parallel and maximally distributed hardware realization of a generic neuro-computing system. More specifically, the proposal employs wireless sensor network technology as a massively parallel and fully distributed hardware platform for implementing and realizing artificial neural network (ANN) algorithms. A parallel and distributed processing (PDP) hardware realization of ANNs makes real-time computation of large-scale (and complex) problems possible in a highly robust framework. We demonstrate how a network of hundreds of thousands of processing nodes (the motes of a wireless sensor network), each with on-board processing and wireless communication, can be used to implement fully parallel and massively distributed computation of artificial neural network algorithms for solving truly large-scale problems in real time. Realizing artificial neural network algorithms in massively parallel and fully distributed hardware has long been a goal of neural network computing research, because such a realization could not be achieved despite all the advancements in silicon- or optics-based computing. As a result, artificial neural networks could not be applied to very large-scale problems requiring real-time solutions. This hindered the development of neural algorithms as affordable and practical solutions to challenging problems, since special-purpose computing approaches in hardware, software, or hybrid (non-neural) form often had to be developed for, and fine-tuned to, specific problems that are very large-scale and highly complex. A successful implementation is likely to revolutionize computing as we know it by making it possible to solve very large-scale scientific, engineering, or technical problems in real time.
Adapting large language models (LLMs) via reinforcement learning (RL) is often bottlenecked by the generation stage, which can consume over 75\% of the training time. Speculative decoding (SD) accelerates autoregressive generation in serving systems, but its behavior under RL training remains largely unexplored. We identify three critical gaps that hinder the naive integration of SD into RL systems: diminishing speedups at large batch sizes, drafter staleness under continual actor updates, and drafter-induced policy degradation. To address these gaps, we present ReSpec, a system that adapts SD to RL through three complementary mechanisms: dynamically tuning SD configurations, evolving the drafter via knowledge distillation, and weighting updates by rollout rewards. On Qwen models (3B--14B), ReSpec achieves up to 4.5x speedup while preserving reward convergence and training stability, providing a practical solution for efficient RL-based LLM adaptation.
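Two of these mechanisms can be pictured with a small numeric sketch (function names, thresholds, and the update rule are our own assumptions, not ReSpec's code): shrink the speculation length as the batch grows, and distill the drafter toward the current actor with per-rollout weights derived from rewards.

    import numpy as np

    def choose_draft_length(batch_size, max_len=5):
        # Fewer speculative tokens per step once the verifier is compute-bound.
        if batch_size <= 8:
            return max_len
        if batch_size <= 32:
            return max(max_len - 2, 1)
        return 1

    def distill_drafter_step(drafter_logits, actor_logits, rewards, lr=0.1):
        # Reward-weighted distillation: per-sequence KL(actor || drafter),
        # weighted by a softmax over rollout rewards.
        w = np.exp(rewards - rewards.max()); w /= w.sum()
        p = np.exp(actor_logits - actor_logits.max(axis=-1, keepdims=True))
        p /= p.sum(axis=-1, keepdims=True)
        q = np.exp(drafter_logits - drafter_logits.max(axis=-1, keepdims=True))
        q /= q.sum(axis=-1, keepdims=True)
        kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)   # one value per sequence
        loss = float((w * kl).sum())
        grad = w[:, None] * (q - p)   # gradient of the weighted KL w.r.t. drafter logits
        return drafter_logits - lr * grad, loss

    rng = np.random.default_rng(0)
    drafter = rng.standard_normal((4, 16)); actor = rng.standard_normal((4, 16))
    drafter, loss = distill_drafter_step(drafter, actor, rng.standard_normal(4))
    print(choose_draft_length(64), round(loss, 3))

Weighting the distillation by reward is one simple way to keep a continually updated drafter from imitating low-reward rollouts, in the spirit of the reward-weighted updates mentioned above.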
CI/CD pipelines are widely used in software development, yet their environmental impact, particularly carbon and water footprints (CWF), remains largely unknown to developers, as CI service providers typically do not disclose such information. With the growing environmental impact of cloud computing, understanding the CWF of CI/CD services has become increasingly important. This work investigates the CWF of using GitHub Actions, focusing on open-source repositories where usage is free and unlimited for standard runners. We build upon a methodology from the Cloud Carbon Footprint framework and use the largest dataset of workflow runs reported in the literature to date, comprising over 2.2 million workflow runs from more than 18,000 repositories. Our analysis reveals that the GitHub Actions ecosystem results in a substantial CWF. Our estimates for the carbon footprint in 2024 range from 150.5 MTCO2e in the most optimistic scenario to 994.9 MTCO2e in the most pessimistic scenario, while the water footprint ranges from 1,989.6 to 37,664.5 kiloliters. The most likely scenario estimates are 456.9 MTCO2e for the carbon footprint and 5,738.2 kiloliters for the water footprint. To provide perspective, the carbon footprint in the most likely scenario is equivalent to the carbon captured by 7,615 urban trees in a year, and the water footprint is comparable to the water consumed by an average American family over 5,053 years. We explore strategies to mitigate this impact, primarily by reducing wasted computational resources. Key recommendations include deploying runners in regions whose energy production has a low environmental impact, such as France and the United Kingdom; implementing stricter deactivation policies for scheduled runs and aligning their execution with periods when the regional energy mix is more environmentally favorable; and reducing the size of repositories.
Modern machine learning (ML) has grown into a tightly coupled, full-stack ecosystem that combines hardware, software, network, and applications. Many users rely on cloud providers for elastic, isolated, and cost-efficient resources. Unfortunately, these platform-as-a-service offerings rely on virtualization, which leaves operators with little insight into users' workloads. This hinders resource optimization by the operator, which is essential to ensure cost efficiency and minimize execution time. In this paper, we argue that workload knowledge is unnecessary for system-level optimization. We propose System-X, which takes a \emph{hardware-centric} approach, relying only on hardware signals -- fully accessible by operators. Using low-level signals collected from the system, System-X detects anomalies through an unsupervised learning pipeline. The pipeline is developed by analyzing over 30 popular ML models on various hardware platforms, ensuring adaptability to emerging workloads and unknown deployment patterns. Using System-X, we successfully identified both network and system configuration issues, accelerating the DeepSeek model by 5.97%.
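For intuition, a minimal hardware-centric detector in this spirit (our sketch; the counter names and the isolation-forest model are assumptions, not System-X's actual pipeline) could be trained only on low-level signals:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Rows: time windows; columns: hypothetical hardware counters
    # [gpu_util, pcie_gbps, nic_gbps, mem_bw_gbps].
    normal = rng.normal([0.92, 12.0, 40.0, 900.0], [0.03, 1.0, 3.0, 40.0], (500, 4))
    faulty = rng.normal([0.55, 12.0,  8.0, 900.0], [0.05, 1.0, 2.0, 40.0], (10, 4))

    detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)
    flags = detector.predict(faulty)   # -1 marks an anomalous window
    print(int((flags == -1).sum()), "of", len(faulty), "degraded windows flagged")

No labels or workload knowledge enter the pipeline; anomalies are flagged purely from deviations in the hardware signal distribution.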
The rising importance of cryptocurrencies as financial assets has pushed them beyond objects of speculation and closer to standard financial instruments such as loans. In this work, we initiate the study of secure protocols that enable fiat-denominated loans collateralized by cryptocurrencies such as Bitcoin. We provide limited-custodial protocols for such loans that rely only on trusted arbitration, together with their game-theoretic analysis. We also highlight various interesting directions for future research.
Scaling global aggregations is a challenge for exactly-once stream processing systems. Current systems implement them either by computing the aggregation in a single task instance or by using static aggregation trees, which limits scalability and can become a bottleneck. Moreover, the end-to-end latency is determined by the slowest path in the tree, and failures and reconfiguration cause large latency spikes due to centralized coordination. To address these issues, we present Holon Streaming, an exactly-once stream processing system for global aggregations. Its deterministic programming model uses windowed conflict-free replicated data types (Windowed CRDTs), a novel abstraction for shared replicated state. Windowed CRDTs make computing global aggregations scalable. Furthermore, their guarantees, such as determinism and convergence, enable the design of efficient failure-recovery algorithms with decentralized coordination. Our evaluation shows 5x lower latency and 2x higher throughput than an existing stream processing system on global aggregation workloads, with an 11x latency reduction under failure scenarios. The paper demonstrates the effectiveness of decentralized coordination with determinism and the utility of Windowed CRDTs for global aggregations.
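As a concrete, if simplified, picture of the abstraction (a sketch we wrote for illustration; Holon Streaming's Windowed CRDTs are richer than this), consider a per-window grow-only counter whose replicas converge to the same per-window totals no matter the merge order:

    from collections import defaultdict

    class WindowedGCounter:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            # window index -> {replica id -> count contributed by that replica}
            self.state = defaultdict(lambda: defaultdict(int))

        def add(self, window, n=1):
            self.state[window][self.replica_id] += n

        def merge(self, other):
            # The join is a per-replica maximum, so merges commute and are idempotent.
            for w, counts in other.state.items():
                for r, c in counts.items():
                    self.state[w][r] = max(self.state[w][r], c)

        def value(self, window):
            return sum(self.state[window].values())

    a, b = WindowedGCounter("a"), WindowedGCounter("b")
    a.add(window=0, n=3); b.add(window=0, n=5); b.add(window=1, n=2)
    a.merge(b); b.merge(a)
    print(a.value(0), b.value(0), a.value(1))   # 8 8 2 on both replicas

Because the merge is commutative, associative, and idempotent, replicas can exchange state in any order and still agree on each window's aggregate, which is what makes decentralized coordination and recovery tractable.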
We revisit the problem of symmetric private information retrieval (SPIR) in settings where the database replication is modeled by a simple graph. Here, each vertex corresponds to a server, and a message is replicated on two servers if and only if there is an edge between them. To satisfy the requirement of database privacy, we let all the servers share some common randomness, independent of the messages. We aim to quantify the improvement in SPIR capacity, i.e., the maximum ratio of the number of desired symbols to the number of downloaded symbols, compared to the setting with graph-replicated common randomness. Towards this, we develop an algorithm to convert a class of PIR schemes into the corresponding SPIR schemes, thereby establishing a capacity lower bound on graphs for which such schemes exist. This includes the class of path and cyclic graphs, for which we derive capacity upper bounds that are tighter than the trivial bounds given by the respective PIR capacities. For the special case of the path graph with three vertices, we identify the SPIR capacity to be $\frac{1}{2}$.
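In the notation used above (our paraphrase of the capacity notion, for orientation only), the rate of a scheme is the ratio of desired to downloaded symbols, and capacity is its best achievable value:

\[
C_{\mathrm{SPIR}} \;=\; \sup_{\text{schemes}} \;\frac{\#\ \text{desired message symbols}}{\#\ \text{downloaded symbols}},
\]

so the stated capacity of $\frac{1}{2}$ for the three-vertex path graph means that no scheme can download fewer than two symbols per retrieved message symbol, and some scheme attains exactly that ratio.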
A team of mobile agents, starting from distinct nodes of a network, must meet at the same node and declare that they have all met. Agents execute the same algorithm, which they start when activated by an adversary or by an agent entering their initial node. When activated, agents traverse edges of the network in synchronous rounds. Their perception and communication are strictly local. This task, known as gathering, is a central problem in distributed mobile systems. Most prior work focuses on minimizing its time complexity, i.e., the worst-case number of rounds between the start of the earliest agent and the task completion. To break possible symmetries, deterministic solutions typically assume that agents have pairwise distinct IDs, called labels, known only to themselves. But must all labels be pairwise distinct to guarantee deterministic gathering? We address this question by considering agents that may share the same label. A team L is said to be gatherable if, for every initial setting of L, there is an algorithm that solves gathering. Our contribution is threefold. (1) We give a full characterization of the gatherable teams. (2) We design an algorithm that gathers all of them in poly$(n,\log\lambda)$ time, where $n$ (resp. $\lambda$) is the graph order (resp. the smallest label in L). This algorithm requires the agents to initially share only $O(\log \log \log \mu)$ bits of common knowledge, where $\mu$ is the largest label multiplicity in L. (3) We show that this dependency is almost optimal for achieving poly$(n,\log\lambda)$-time complexity. As a by-product, we obtain the first deterministic poly$(n,\log\lambda)$-time algorithm requiring no common knowledge that gathers any team whose labels are all distinct. While this was known to be achievable for two-agent teams, extending it to teams of any size faced a major challenge: termination detection. Our techniques to address it may be of independent interest.
With the explosive growth of big data, workloads tend to get more complex and computationally demanding. Such applications are processed on distributed interconnected resources that are becoming larger in scale and computational capacity. Data-intensive applications may have different degrees of parallelism and must effectively exploit data locality. Furthermore, they may impose several Quality of Service requirements, such as time constraints and resilience against failures, as well as other objectives, like energy efficiency. These features of the workloads, as well as the inherent characteristics of the computing resources required to process them, present major challenges that require the employment of effective scheduling techniques. In this chapter, a classification of data-intensive workloads is proposed and an overview of the most commonly used approaches for their scheduling in large-scale distributed systems is given. We present novel strategies that have been proposed in the literature and shed light on open challenges and future directions.
The integration of clinical data offers significant potential for the development of personalized medicine. However, its use is severely restricted by the General Data Protection Regulation (GDPR), especially for small cohorts with rare diseases. High-quality, structured data is essential for the development of predictive medical AI. In this case study, we propose a novel, multi-stage approach to secure AI training: (1) The model is designed on a simulated clinical knowledge graph (cKG), which is used exclusively to represent the structural characteristics of the real cKG without revealing any sensitive content. (2) The model is then integrated into the FeatureCloud (FC) federated learning framework, where it is prepared in a single-client configuration within a protected execution environment. (3) Training then takes place within the hospital environment on the real cKG, either under the direct supervision of hospital staff or via a fully automated pipeline controlled by the hospital. (4) Finally, verified evaluation scripts are executed, which return only aggregated performance metrics. This enables immediate performance feedback without sensitive patient data or individual predictions leaving the clinic. A fundamental element of this approach is the cKG itself, which organizes multi-omics and patient data in the context of real-world hospital environments. This approach was successfully validated during the TUM.ai Makeathon 2024 (TUMaiM24) in a challenge set by the Dr. von Hauner Children's Hospital (HCH-LMU): 50 students developed models for patient classification and diagnosis without access to real data. Deploying secure algorithms via federated frameworks, such as the FC framework, could be a practical way of achieving privacy-preserving AI in healthcare.
As large language models (LLMs) continue to scale up, mixture-of-experts (MoE) has become a common technique in SOTA models. MoE models rely on expert parallelism (EP) to alleviate the memory bottleneck, which introduces all-to-all communication to dispatch and combine tokens across devices. However, in widely adopted GPU clusters, high-overhead cross-node communication makes all-to-all expensive, hindering the adoption of EP. Recently, wafer-scale chips (WSCs) have emerged as a platform integrating numerous devices on a wafer-sized interposer. WSCs provide a unified high-performance network connecting all devices, showing promising potential for hosting MoE models. Yet their network is restricted to a mesh topology, causing imbalanced communication pressure and performance loss. Moreover, the lack of on-wafer disk leads to high-overhead expert migration on the critical path. To fully unleash this potential, we first propose Entwined Ring Mapping (ER-Mapping), which co-designs the mapping of attention and MoE layers to balance communication pressure and achieve better performance. We find that under ER-Mapping, the distributions of cold and hot links in the attention and MoE layers are complementary. Therefore, to hide the migration overhead, we propose the Non-invasive Balancer (NI-Balancer), which splits a complete expert migration into multiple steps and alternately utilizes the cold links of both layers. Evaluation shows that ER-Mapping reduces communication by up to 62%. NI-Balancer further delivers 54% and 22% improvements in MoE computation and communication, respectively. Compared with the SOTA NVL72 supernode, the WSC platform delivers on average 39% higher per-device MoE performance owing to its scalability to larger EP.
In the rapidly evolving field of artificial intelligence (AI), the demand for fast, computationally efficient, and scalable solutions has increased in recent years. This paper considers the problem of optimizing computing resources for distributed machine learning (ML) and optimization. Given a set of data distributed over a network of computing nodes/servers, the idea is to optimally assign the CPU (central processing unit) usage while simultaneously training each computing node locally on its own share of the data. This formulates the problem as a co-optimization setup that (i) optimizes the data processing and (ii) optimally allocates the computing resources. The information-sharing network among the nodes may be time-varying, but with balanced weights to ensure consensus-type convergence of the algorithm. The algorithm is all-time feasible, which implies that the computing resource-demand balance constraint holds at all iterations of the proposed solution. Moreover, the solution accommodates log-scale quantization over the information-sharing channels, i.e., nodes exchange log-quantized data. As example applications, distributed support-vector-machine (SVM) training and regression are considered as the ML training models. Results from perturbation theory, along with Lyapunov stability and eigen-spectrum analysis, are used to prove convergence towards the optimal solution. Compared to existing CPU scheduling solutions, the proposed algorithm improves the cost optimality gap by more than $50\%$.
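A schematic form of this co-optimization (our paraphrase; the symbols and exact cost terms are assumptions, not taken from the paper) is

\[
\min_{\{\mathbf{x}_i\},\{c_i\}} \; \sum_{i=1}^{N} f_i(\mathbf{x}_i;\mathcal{D}_i) + \gamma \sum_{i=1}^{N} h_i(c_i)
\quad \text{s.t.} \quad \sum_{i=1}^{N} c_i = C, \qquad \mathbf{x}_1 = \cdots = \mathbf{x}_N,
\]

where $f_i$ is node $i$'s local training loss (e.g., SVM or regression) on its data share $\mathcal{D}_i$, $h_i(c_i)$ is the cost of its CPU allocation $c_i$, and $C$ is the total available computing resource; all-time feasibility means the iterates satisfy $\sum_i c_i = C$ at every step, even when the exchanged data are log-quantized.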
Neural networks are rapidly gaining popularity in scientific research, but training the models is often very time-consuming. Particularly when the training samples are large, high-dimensional arrays, efficient training methodologies that reduce computational costs are crucial. To reduce the training cost, we propose a Multi-Resolution Model Fusion (MRMF) method that combines models trained on reduced-resolution data, which are then refined with data at the original resolution. We demonstrate that these reduced-resolution models and datasets can be generated quickly. More importantly, the proposed approach reduces the training time by speeding up model convergence in each fusion stage before switching to the final stage of fine-tuning with data at its original resolution. This strategy ensures that the final model retains high-resolution insights while benefiting from the computational efficiency of lower-resolution training. Our experimental results demonstrate that the multi-resolution model fusion method can significantly reduce end-to-end training time while maintaining the same model accuracy. Evaluated on two real-world scientific applications, CosmoFlow and Neuron Inverter, the proposed method reduces training time by up to 47% and 44%, respectively, compared to original-resolution training, without affecting model accuracy.
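A minimal sketch of such a staged schedule (our illustration with assumed pooling factors and a least-squares stand-in for the real networks; this is not the MRMF implementation):

    import numpy as np

    def downsample(x, factor):
        """Average-pool a batch of samples by `factor` along the last axis."""
        n = (x.shape[-1] // factor) * factor
        return x[..., :n].reshape(*x.shape[:-1], -1, factor).mean(axis=-1)

    def train(w, x, y, lr=1e-2, epochs=50):
        """Plain least-squares gradient descent as a stand-in for network training."""
        for _ in range(epochs):
            w -= lr * (x.T @ (x @ w - y)) / len(x)
        return w

    rng = np.random.default_rng(0)
    x_full = rng.standard_normal((256, 64)); y = rng.standard_normal(256)

    # Stage 1: cheap pass at 1/4 resolution; weights live in the coarse space.
    x_low = downsample(x_full, 4)
    w_low = train(np.zeros(x_low.shape[1]), x_low, y)

    # Fusion: lift coarse weights to full resolution (repeat and rescale), then
    # Stage 2: a short fine-tuning pass on the original-resolution data.
    w_full = train(np.repeat(w_low / 4.0, 4), x_full, y, epochs=10)
    print(w_full.shape)   # (64,)

The bulk of the iterations run on the cheap, coarsened data, and only a short final stage touches the full-resolution samples, which mirrors how the abstract describes the source of the end-to-end savings.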
Optimistic responsiveness -- the ability of a consensus protocol to operate at the speed of the network -- is widely used in consensus protocol design to optimize latency and throughput. However, blockchain applications incentivize validators to play timing games by strategically delaying their proposals, since increased block time correlates with greater rewards. Consequently, it may appear that responsiveness (even under optimistic conditions) is impossible in blockchain protocols. In this work, we develop a model of timing games in responsive consensus protocols and find a prisoner's dilemma structure, where cooperation (proposing promptly) is in the validators' best interest, but individual incentives encourage validators to delay proposals selfishly. To attain desirable equilibria, we introduce dynamic block rewards that decrease with round time to explicitly incentivize faster proposals. Delays are measured through a voting mechanism, where other validators vote on the current leader's round time. By carefully setting the protocol parameters, the voting mechanism allows validators to coordinate and reach the cooperative equilibrium, benefiting all through a higher rate-of-reward. Thus, instead of responsiveness being an unattainable property due to timing games, we show that responsiveness itself can promote faster block proposals. One consequence of moving from a static to dynamic block reward is that validator utilities become more sensitive to latency, worsening the gap between the best- and worst-connected validators. Our analysis shows, however, that this effect is minor in both theoretical latency models and simulations based on real-world networks.
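One illustrative shape for such a reward schedule (our example; the paper's exact reward function and measurement rule may differ) is

\[
R(\hat{\tau}) \;=\; R_{\max}\, e^{-\kappa\,\max(\hat{\tau}-\Delta,\,0)},
\qquad
\hat{\tau} \;=\; \operatorname{median}\{\tau_v : v \in \text{voters}\},
\]

where $\hat{\tau}$ is the leader's round time as reported through the voting mechanism, $\Delta$ is an allowance for unavoidable network delay, and $\kappa>0$ controls how sharply delayed proposals are penalized; a reward that decreases quickly enough in $\hat{\tau}$ is intended to make prompt proposing individually rational, in the spirit of the cooperative equilibrium discussed above.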
This paper introduces a novel paradigm for the analysis and verification of concurrent programs -- the Singularity Theory. We model the execution space of a concurrent program as a branched topological space, where program states are points and state transitions are paths. Within this framework, we characterize deadlocks as attractors and livelocks as non-contractible loops in the execution space. By employing tools from algebraic topology, particularly homotopy and homology groups, we define a series of concurrent topological invariants to systematically detect and classify these concurrent "singularities" without exhaustively traversing all states. This work aims to establish a geometric and topological foundation for concurrent program verification, transcending the limitations of traditional model checking.