As renewable energy integration, sector coupling, and spatiotemporal detail increase, energy system optimization models grow in size and complexity, often pushing solvers to their performance limits. This systematic review explores parallelization strategies that can address these challenges. We first propose a classification scheme for linear energy system optimization models, covering their analytical focus, mathematical structure, and scope. We then review parallel decomposition methods, finding that while many offer performance benefits, no single approach is universally superior. The lack of standardized benchmark suites further complicates comparison. To address this, we recommend essential criteria for future benchmarks and minimum reporting standards. We also survey available software tools for parallel decomposition, including modular frameworks and algorithmic abstractions. Though centered on energy system models, our insights extend to the broader operations research field.
Numerous applications, such as Krylov subspace solvers, make extensive use of the block classical Gram-Schmidt (BCGS) algorithm and its reorthogonalized variants for orthogonalizing a set of vectors. For large-scale problems in distributed memory settings, the communication cost, particularly the global synchronization cost, is a major performance bottleneck. In recent years, many low-synchronization BCGS variants have been proposed in an effort to reduce the number of synchronization points. The work [E. Carson, Y. Ma, arXiv preprint 2411.07077] recently proposed stable one-synchronization and two-synchronization variants of BCGS, i.e., BCGSI+P-1S and BCGSI+P-2S. In this work, we evaluate the performance of BCGSI+P-1S and BCGSI+P-2S on a distributed memory system compared to other well-known low-synchronization BCGS variants. In comparison to the classical reorthogonalized BCGS algorithm (BCGSI+), numerical experiments demonstrate that BCGSI+P-1S and BCGSI+P-2S can achieve up to 4 times and 2 times speedups, respectively, and perform similarly to other (less stable) one-synchronization and two-synchronization variants. BCGSI+P-1S and BCGSI+P-2S are therefore recommended as the best choice in practice for computing an economic QR factorization on distributed memory systems due to their superior stability when compared to other variants with the same synchronization cost.
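To make the orthogonalization scheme concrete, here is a minimal NumPy sketch of block classical Gram-Schmidt with reorthogonalization (the classical BCGSI+ idea), not the low-synchronization BCGSI+P-1S/2S variants the paper evaluates; block sizes and inputs are illustrative only.

```python
# Minimal sketch of block classical Gram-Schmidt with reorthogonalization
# (the classical BCGSI+ idea, not the paper's low-synchronization variants).
import numpy as np

def bcgs_plus(blocks):
    """Orthogonalize a list of m-by-s blocks; returns the Q blocks."""
    Q = []
    for X in blocks:
        W = X.copy()
        for _ in range(2):              # two passes = reorthogonalization
            for Qj in Q:
                W -= Qj @ (Qj.T @ W)    # project out previous blocks
        Qi, _ = np.linalg.qr(W)         # intra-block orthogonalization
        Q.append(Qi)
    return Q

rng = np.random.default_rng(0)
blocks = [rng.standard_normal((1000, 8)) for _ in range(4)]
Q = np.hstack(bcgs_plus(blocks))
print(np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))  # loss of orthogonality
```

In a distributed-memory setting, each block inner product `Qj.T @ W` and the intra-block factorization require a global reduction; the low-synchronization variants discussed in the abstract restructure the algorithm so these reductions are batched into one or two synchronization points per block.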
The development of Cloud-Edge-IoT applications requires robust programming models. Existing models often struggle to manage the dynamic and stateful nature of these applications effectively. This paper introduces the Collaborative State Machines (CSM) programming model to address these complexities. CSM facilitates the development of reactive, event-driven, and stateful applications targeting the Cloud-Edge-IoT continuum. Applications built with CSM are composed of state machines that collaborate autonomously and can be distributed across different layers of the continuum. Key features of CSM include (i) a sophisticated collaboration mechanism among state machines utilizing events and persistent data; (ii) encapsulation of state through the inherent state of state machines and persistent data; (iii) integration of actions and service invocations within states and state transitions, thereby decoupling complex application logic from compute and data processing services; and (iv) an advanced data model that supports the processing of local, static, and persistent data with defined scope and lifetime. In addition to introducing the CSM programming model, we present a runtime system and a comprehensive evaluation of our approach. This evaluation is based on three use cases: a stress test on a large-scale infrastructure, a surveillance system application, and a complex smart factory scenario, all deployed on the Grid'5000 testbed. Our results demonstrate a 12x increase in throughput through novel language features in the stress test. Compared to Serverless Workflow, a state-of-the-art baseline system, we show a 2.3x improvement in processing time per processed image in a surveillance system use case, a 55x reduction in total processing time for a smart factory use case, and an overall improvement in productivity across these use cases.
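The collaboration pattern described above can be illustrated with a tiny sketch: two event-driven state machines that communicate through a shared event queue and persistent data. The class and event names are hypothetical and only mimic the CSM-style collaboration; they are not the CSM language or runtime.

```python
# Two hypothetical state machines collaborating via events and persistent data.
from collections import deque

events = deque()            # shared event channel
persistent = {}             # persistent data visible to all machines

class Camera:
    state = "idle"
    def handle(self, event):
        if self.state == "idle" and event == "motion":
            persistent["frame_id"] = persistent.get("frame_id", 0) + 1
            events.append("frame_ready")      # raise event for the analyzer
            self.state = "capturing"
        elif self.state == "capturing" and event == "ack":
            self.state = "idle"

class Analyzer:
    state = "waiting"
    def handle(self, event):
        if self.state == "waiting" and event == "frame_ready":
            print("analyzing frame", persistent["frame_id"])
            events.append("ack")              # stays ready for the next frame

machines = [Camera(), Analyzer()]
events.append("motion")
while events:
    ev = events.popleft()
    for m in machines:
        m.handle(ev)
```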
Skiplists are widely used for in-memory indexing in many key-value stores, such as RocksDB and LevelDB, due to their ease of implementation and simple concurrency control mechanisms. However, traditional skiplists suffer from poor cache locality because they store only a single element per node, which limits performance. Minimizing last-level cache misses is key to maximizing in-memory index performance, making high cache locality essential. In this paper, we present a practical concurrent B-skiplist that enhances cache locality and performance while preserving the simplicity of traditional skiplist structures and concurrency control schemes. Our key contributions include a top-down, single-pass insertion algorithm for B-skiplists and a corresponding simple and efficient top-down concurrency control scheme. On 128 threads, the proposed concurrent B-skiplist achieves 2x-9x higher throughput compared to state-of-the-art concurrent skiplist implementations, including Facebook's concurrent skiplist from Folly and the Java ConcurrentSkipListMap. Furthermore, we find that the B-skiplist achieves competitive (0.9x-1.7x) throughput on point workloads compared to state-of-the-art cache-optimized tree-based indices (e.g., Masstree). For a more complete picture of the performance, we also measure the latency of skiplist and tree-based indices and find that the B-skiplist achieves 3.5x-103x lower 99% latency compared to other concurrent skiplists and 0.85x-64x lower 99% latency compared to tree-based indices on point workloads with inserts.
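The layout idea behind a B-skiplist can be sketched in a few lines: every node packs up to B sorted keys (instead of one), and upper levels index the nodes below. The single-threaded, hand-built example below only illustrates the node layout and search path, not the paper's concurrent top-down insertion algorithm.

```python
# Simplified B-skiplist layout and search; illustration only.
import bisect

B = 4  # keys per node (cache-line sized in a real implementation)

class Node:
    def __init__(self, keys, downs=None, nxt=None):
        self.keys = keys        # sorted keys stored in this node
        self.downs = downs      # per-key pointers to nodes one level below
        self.next = nxt         # next node on the same level

def search(head, key):
    node = head
    while node is not None:
        # advance right while the next node still starts at or below key
        while node.next and node.next.keys[0] <= key:
            node = node.next
        i = bisect.bisect_right(node.keys, key) - 1
        if node.downs is None:                 # bottom level: leaf blocks
            return i >= 0 and node.keys[i] == key
        node = node.downs[max(i, 0)]           # descend next to the pivot
    return False

# Tiny hand-built two-level example: leaves hold blocks of keys,
# the upper level indexes the first key of each leaf.
leaf1 = Node([1, 3, 5, 7])
leaf2 = Node([9, 11, 13])
leaf1.next = leaf2
top = Node([1, 9], downs=[leaf1, leaf2])
print(search(top, 11), search(top, 4))   # True False
```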
Choosing the right resource can speed up job completion, better utilize the available hardware, and visibly reduce costs, especially when renting computers in the cloud. This was demonstrated in earlier studies on HEPCloud. However, the benchmarking of the resources proved to be a laborious and time-consuming process. This paper presents GlideinBenchmark, a new Web application leveraging the pilot infrastructure of GlideinWMS to benchmark resources, and it shows how to use the data collected and published by GlideinBenchmark to automate the optimal selection of resources. An experiment can select the benchmark or the set of benchmarks that most closely evaluate the performance of its workflows. GlideinBenchmark, with the help of the GlideinWMS Factory, controls the benchmark execution. Finally, a scheduler like HEPCloud's Decision Engine can use the results to optimize resource provisioning.
GlideinWMS is a workload manager provisioning resources for many experiments, including CMS and DUNE. The software is distributed both as native packages and specialized production containers. Following an approach used in other communities like web development, we built our workspaces: system-like containers that ease development and testing. Developers can change the source tree or check out a different branch and quickly reconfigure the services to see the effect of their changes. In this paper, we will talk about what differentiates workspaces from other containers. We will describe our base system, composed of three containers: a one-node cluster including a compute element and a batch system, a GlideinWMS Factory controlling pilot jobs, and a scheduler and Frontend to submit jobs and provision resources. Additional containers can be used for optional components. This system can easily run on a laptop, and we will share our evaluation of different container runtimes, with an eye for ease of use and performance. Finally, we will talk about our experience as developers and with students. The GlideinWMS workspaces are easily integrated with IDEs like VS Code, simplifying debugging and allowing development and testing of the system even when offline. They simplified the training and onboarding of new team members and summer interns. And they were useful in workshops where students could gain first-hand experience with the mechanisms and components that, in production, run millions of jobs.
Modern deployment of large language models (LLMs) frequently involves both inference serving and continuous retraining to stay aligned with evolving data and user feedback. Common practices separate these workloads onto distinct servers in isolated phases, causing substantial inefficiencies (e.g., GPU idleness) and delayed adaptation to new data in distributed settings. Our empirical analysis reveals that these inefficiencies stem from dynamic request arrivals during serving and workload heterogeneity in pipeline-parallel training. To address these challenges, we propose LeMix, a system for co-locating and managing concurrent LLM serving and training workloads. LeMix integrates offline profiling, execution prediction mechanisms, and runtime scheduling to dynamically adapt resource allocation based on workload characteristics and system conditions. By understanding task-specific behaviors and co-execution interference across shared nodes, LeMix improves utilization and serving quality without compromising serving responsiveness. Our evaluation shows that LeMix improves throughput by up to 3.53x, reduces inference loss by up to 0.61x, and delivers up to 2.12x higher response time SLO attainment over traditional separate setups. To our knowledge, this is the first work to uncover and exploit the opportunities of joint LLM inference and training, paving the way for more resource-efficient deployment of LLMs in production environments.
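The co-location idea can be sketched with a toy scheduling rule: serve inference first, slot a training micro-batch into predicted idle time when the SLO headroom allows it. The cost model, thresholds, and function below are hypothetical placeholders, not LeMix's actual profiling, prediction, or scheduling components.

```python
# Hedged sketch of a serve/train co-location decision for one node.
def schedule_step(pending_requests, predicted_serve_ms, train_step_ms,
                  slo_ms, slack_factor=0.8):
    """Return 'serve', 'train', or 'both' for the next scheduling slot."""
    if not pending_requests:
        return "train"                      # no demand: keep the GPU busy
    # estimated headroom before the tightest request would violate its SLO
    headroom = slo_ms - predicted_serve_ms
    if headroom * slack_factor > train_step_ms:
        return "both"                       # interleave a training step
    return "serve"                          # protect responsiveness

print(schedule_step([], 0, 120, 300))            # train
print(schedule_step([1, 2], 90, 120, 300))       # both
print(schedule_step([1, 2], 250, 120, 300))      # serve
```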
Sparse matrix-sparse matrix multiplication (SpGEMM) is a key kernel in many scientific applications and graph workloads. Unfortunately, SpGEMM is bottlenecked by data movement due to its irregular memory access patterns. Significant work has been devoted to developing row reordering schemes to improve locality in sparse operations, but prior studies mostly focus on sparse matrix-vector multiplication (SpMV). In this paper, we address these issues with a hierarchical clustering approach for SpGEMM that combines row reordering with cluster-wise computation: a novel row-clustered matrix format and access pattern for the first input (A) matrix improve reuse of the second input (B) matrix. We find that hierarchical clustering can speed up SpGEMM by 1.39x on average with low preprocessing cost (less than 20x the cost of a single SpGEMM on about 90% of inputs). Furthermore, we decouple the reordering algorithm from the clustered matrix format so they can be applied as independent optimizations. Additionally, this paper sheds light on the role of row reordering and clustering, both independently and together, for SpGEMM through a comprehensive empirical study of the effect of 10 different reordering algorithms and 3 clustering schemes on SpGEMM performance on a suite of 110 matrices. We find that reordering based on graph partitioning provides better SpGEMM performance than existing alternatives at the cost of high preprocessing time. The evaluation demonstrates that the proposed hierarchical clustering method achieves greater average speedup than other reordering schemes with similar preprocessing times.
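The following SciPy sketch illustrates the cluster-wise computation pattern: rows of A with similar column structure are grouped, then multiplied cluster by cluster so the rows of B they touch stay warm in cache. The clustering heuristic here (hashing each row's nonzero-column set into buckets) is a deliberately crude stand-in for the paper's hierarchical clustering, chosen only to keep the sketch short.

```python
# Illustrative row-clustered SpGEMM; the clustering heuristic is a stand-in.
import numpy as np
import scipy.sparse as sp

def clustered_spgemm(A, B, n_clusters=8):
    A = A.tocsr()
    # crude clustering: bucket rows by a hash of their nonzero column ids
    labels = np.array([hash(tuple(A.indices[A.indptr[i]:A.indptr[i + 1]]))
                       % n_clusters for i in range(A.shape[0])])
    order = np.argsort(labels, kind="stable")      # row reordering
    C_rows = [None] * A.shape[0]
    for c in range(n_clusters):
        rows = order[labels[order] == c]
        if len(rows):
            block = A[rows] @ B                    # cluster-wise SpGEMM
            for k, r in enumerate(rows):
                C_rows[r] = block[k]
    return sp.vstack(C_rows).tocsr()

A = sp.random(200, 200, density=0.05, format="csr", random_state=0)
B = sp.random(200, 200, density=0.05, format="csr", random_state=1)
assert abs(clustered_spgemm(A, B) - A @ B).sum() < 1e-9
```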
Spatial Branch and Bound (B&B) algorithms are widely used for solving nonconvex problems to global optimality, yet they remain computationally expensive. Although some work has been carried out to speed up B&B via CPU parallelization, GPU parallelization is much less explored. In this work, we investigate the design of a spatial B&B algorithm that involves an interval-based GPU-parallel lower bounding solver: the domain of each B&B node is temporarily partitioned into numerous subdomains, and massive GPU parallelism is then leveraged to compute interval bounds of the objective function and constraints on each subdomain using the Mean Value Form. The resulting bounds are tighter than those achieved via regular interval arithmetic without partitioning, yet they remain fast to compute. We implement the method in our open-source solver MAiNGO via CUDA in two ways: wrapping all GPU tasks within one kernel function, or distributing the GPU tasks onto a CUDA graph. Numerical experiments show that using more subdomains leads to significantly tighter lower bounds and thus fewer B&B iterations. Regarding wall-clock time, the proposed spatial B&B framework achieves a speedup of three orders of magnitude compared to applying interval arithmetic on the CPU without domain partitioning. Of the two implementations, the one developed with CUDA graphs achieves higher efficiency. Moreover, in some case studies, the proposed method delivers competitive or better performance than MAiNGO's default solver, which is based on McCormick relaxations. These results highlight the potential of GPU-accelerated bounding techniques to accelerate B&B algorithms.
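A minimal sketch of the per-subdomain bounding step: on an interval X with midpoint c, the mean value form gives f(X) >= f(c) - |f'(X)| * rad(X), and partitioning the domain into many subdomains tightens the overall bound. The 1-D example function below is purely illustrative; the paper applies the same idea to objectives and constraints on GPU-resident subdomains.

```python
# Mean-value-form lower bound on a subdivided 1-D domain (illustration only).
def mvf_lower_bound(f, df_bound, lo, hi):
    """Lower bound of f on [lo, hi] via the mean value form."""
    c, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    dlo, dhi = df_bound(lo, hi)                # interval enclosure of f'
    return f(c) - max(abs(dlo), abs(dhi)) * rad

f = lambda x: x * x - 2.0 * x                  # example objective
df_bound = lambda lo, hi: (2 * lo - 2, 2 * hi - 2)   # f'(x) = 2x - 2 is monotone

lo, hi, n_sub = -1.0, 3.0, 64                  # partition the domain
width = (hi - lo) / n_sub
bound = min(mvf_lower_bound(f, df_bound, lo + i * width, lo + (i + 1) * width)
            for i in range(n_sub))
print(bound)   # approaches the true minimum f(1) = -1 as n_sub grows
```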
Interactive multimodal applications (IMAs), such as route planning in the Internet of Vehicles, enrich users' personalized experiences by integrating various forms of data over wireless networks. Recent advances in large language models (LLMs) utilize mixture-of-experts (MoE) mechanisms to empower multiple IMAs, with each LLM trained individually for a specific task, each presenting a different business workflow. In contrast to existing approaches that rely on multiple LLMs for IMAs, this paper presents a novel paradigm that accomplishes various IMAs using a single compositional LLM over wireless networks. The two primary challenges are 1) guiding a single LLM to adapt to diverse IMA objectives and 2) ensuring the flexibility and efficiency of the LLM in resource-constrained mobile environments. To tackle the first challenge, we propose ContextLoRA, a novel method that guides an LLM to learn the rich structured context among IMAs by constructing a task dependency graph. We partition the learnable parameter matrix of neural layers for each IMA to facilitate LLM composition. Then, we develop a step-by-step fine-tuning procedure guided by task relations, including training, freezing, and masking phases. This allows the LLM to learn to reason among tasks for better adaptation, capturing the latent dependencies between tasks. For the second challenge, we introduce ContextGear, a scheduling strategy to optimize the training procedure of ContextLoRA, aiming to minimize computational and communication costs through a strategic grouping mechanism. Experiments on three benchmarks show the superiority of the proposed ContextLoRA and ContextGear. Furthermore, we prototype our proposed paradigm on a real-world wireless testbed, demonstrating its practical applicability for various IMAs. We will release our code to the community.
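The per-task parameter partitioning can be sketched as follows: the low-rank update of one layer is split column-wise so each IMA owns a slice, and a mask decides which slices are trainable in the current fine-tuning phase. Names, shapes, and the masking policy below are illustrative assumptions, not ContextLoRA's actual implementation.

```python
# Hedged sketch of per-task partitioning of a low-rank adapter matrix.
import numpy as np

d, r, tasks = 64, 12, ["navigate", "caption", "plan"]
slice_size = r // len(tasks)
A = np.zeros((d, r))                     # low-rank "down" matrix, shared layout
owner = {t: slice(i * slice_size, (i + 1) * slice_size)
         for i, t in enumerate(tasks)}

def trainable_mask(active_task, frozen_tasks):
    """1 where gradients may flow, 0 where slices are frozen/masked."""
    mask = np.zeros(r)
    mask[owner[active_task]] = 1.0
    for t in tasks:
        if t not in frozen_tasks and t != active_task:
            mask[owner[t]] = 1.0         # still plastic in this phase
    return mask

# hypothetical phase: train "caption" while keeping "navigate" frozen
print(trainable_mask("caption", frozen_tasks={"navigate"}))
```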
Efficient memory management in heterogeneous systems is increasingly challenging due to diverse compute architectures (e.g., CPU, GPU, FPGA) and dynamic task mappings not known at compile time. Existing approaches often require programmers to manage data placement and transfers explicitly, or assume static mappings that limit portability and scalability. This paper introduces RIMMS (Runtime Integrated Memory Management System), a lightweight, runtime-managed, hardware-agnostic memory abstraction layer that decouples application development from low-level memory operations. RIMMS transparently tracks data locations, manages consistency, and supports efficient memory allocation across heterogeneous compute elements without requiring platform-specific tuning or code modifications. We integrate RIMMS into a baseline runtime and evaluate with complete radar signal processing applications across CPU+GPU and CPU+FPGA platforms. RIMMS delivers up to 2.43X speedup on GPU-based and 1.82X on FPGA-based systems over the baseline. Compared to IRIS, a recent heterogeneous runtime system, RIMMS achieves up to 3.08X speedup and matches the performance of native CUDA implementations while significantly reducing programming complexity. Despite operating at a higher abstraction level, RIMMS incurs only 1-2 cycles of overhead per memory management call, making it a low-cost solution. These results demonstrate RIMMS's ability to deliver high performance and enhanced programmer productivity in dynamic, real-world heterogeneous environments.
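A tiny sketch of the runtime-managed abstraction in the spirit of RIMMS: the application only names buffers, and the runtime tracks which device holds the valid copy, transferring lazily before a task runs elsewhere. The device names and copy hook are placeholders, not RIMMS's API.

```python
# Hedged sketch of runtime data-location tracking across heterogeneous devices.
class MemoryManager:
    def __init__(self, copy_fn):
        self.location = {}        # buffer name -> device holding valid copy
        self.copy_fn = copy_fn    # platform-specific transfer hook

    def acquire(self, name, device):
        """Make `name` valid on `device`, transferring only if needed."""
        src = self.location.get(name)
        if src is not None and src != device:
            self.copy_fn(name, src, device)
        self.location[name] = device

mm = MemoryManager(lambda n, s, d: print(f"copy {n}: {s} -> {d}"))
mm.acquire("samples", "cpu")      # first touch, no transfer
mm.acquire("samples", "gpu0")     # copy cpu -> gpu0
mm.acquire("samples", "gpu0")     # already valid, no transfer
```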
We study centralized distributed data parallel training of deep neural networks (DNNs), aiming to improve the trade-off between communication efficiency and model performance of the local gradient methods. To this end, we revisit the flat-minima hypothesis, which suggests that models with better generalization tend to lie in flatter regions of the loss landscape. We introduce a simple, yet effective, sharpness measure, Inverse Mean Valley, and demonstrate its strong correlation with the generalization gap of DNNs. We incorporate an efficient relaxation of this measure into the distributed training objective as a lightweight regularizer that encourages workers to collaboratively seek wide minima. The regularizer exerts a pushing force that counteracts the consensus step pulling the workers together, giving rise to the Distributed Pull-Push Force (DPPF) algorithm. Empirically, we show that DPPF outperforms other communication-efficient approaches and achieves better generalization performance than local gradient methods and synchronous gradient averaging, while significantly reducing communication overhead. In addition, our loss landscape visualizations confirm the ability of DPPF to locate flatter minima. On the theoretical side, we show that DPPF guides workers to span flat valleys, with the final valley width governed by the interplay between push and pull strengths, and that its pull-push dynamics is self-stabilizing. We further provide generalization guarantees linked to the valley width and prove convergence in the non-convex setting.
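The opposing forces can be sketched in a few lines: each worker takes a local gradient step, is pulled toward the worker average (the consensus step), and is pushed away from it with a smaller strength so the workers spread out across a valley. The exact regularizer (Inverse Mean Valley) and coefficients in the paper differ; this only illustrates the pull-push structure described in the abstract.

```python
# Hedged sketch of one pull-push round across workers.
import numpy as np

def pull_push_round(params, grads, lr=0.1, pull=0.5, push=0.2):
    mean = np.mean(params, axis=0)
    new = []
    for w, g in zip(params, grads):
        w = w - lr * g                       # local gradient step
        w = w - pull * (w - mean)            # consensus pull toward the mean
        w = w + push * (w - mean)            # push force widening the valley
        new.append(w)
    return np.stack(new)

rng = np.random.default_rng(0)
params = rng.standard_normal((4, 3))         # 4 workers, 3 parameters
grads = rng.standard_normal((4, 3))
print(pull_push_round(params, grads))
```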
Scientific and data science applications are becoming increasingly complex, with growing computational and memory demands. Modern high performance computing (HPC) systems provide high parallelism and heterogeneity across nodes, devices, and cores. To achieve good performance, effective scheduling and load balancing techniques are essential. Parallel programming frameworks such as OpenMP now offer a variety of advanced scheduling algorithms to support diverse applications and platforms. This creates an instance of the scheduling algorithm selection problem, which involves identifying the most suitable algorithm for a given combination of workload and system characteristics. In this work, we explore learning-based approaches for selecting scheduling algorithms in OpenMP. We propose and evaluate expert-based and reinforcement learning (RL)-based methods, and conduct a detailed performance analysis across six applications and three systems. Our results show that RL methods are capable of learning high-performing scheduling decisions, although they require significant exploration, with the choice of reward function playing a key role. Expert-based methods, in contrast, rely on prior knowledge and involve less exploration, though they may not always identify the optimal algorithm for a specific application-system pair. By combining expert knowledge with RL-based learning, we achieve improved performance and greater adaptability. Overall, this work demonstrates that dynamic selection of scheduling algorithms during execution is both viable and beneficial for OpenMP applications. The approach can also be extended to MPI-based programs, enabling optimization of scheduling decisions across multiple levels of parallelism.
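As a concrete, simplified instance of learning-based selection, the sketch below uses an epsilon-greedy bandit that picks an OpenMP scheduling algorithm per execution step and updates its value estimate from the measured loop time (reward = negative time). The candidate list and reward are illustrative; the paper studies richer RL formulations and expert rules.

```python
# Hedged sketch of epsilon-greedy selection among OpenMP schedule kinds.
import random

schedules = ["static", "dynamic,16", "guided", "auto"]
value = {s: 0.0 for s in schedules}
count = {s: 0 for s in schedules}

def select(eps=0.1):
    if random.random() < eps:
        return random.choice(schedules)            # explore
    return max(schedules, key=lambda s: value[s])  # exploit

def update(schedule, elapsed_time):
    count[schedule] += 1
    reward = -elapsed_time
    value[schedule] += (reward - value[schedule]) / count[schedule]

# one simulated time step: run the loop under the chosen schedule, then learn
chosen = select()
update(chosen, elapsed_time=0.042)                 # measured loop time (s)
print(chosen, value[chosen])
```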
Leader election is a fundamental problem in distributed computing, particularly within programmable matter systems, where coordination among simple computational entities is crucial for solving complex tasks. In these systems, particles (i.e., constant-memory computational entities) operate on a regular triangular grid, as described in the geometric Amoebot model. While leader election has been extensively studied in non-self-stabilising settings, self-stabilising solutions remain more limited. In this work, we study the problem of self-stabilising leader election in connected (but not necessarily simply connected) configurations. We present the first self-stabilising algorithm for programmable matter that guarantees the election of a unique leader under an unfair scheduler, assuming particles share a common sense of direction. Our approach leverages particle movement, a capability not previously exploited in the self-stabilising context. We show that movement, in conjunction with particles operating on a grid, can overcome classical impossibility results for constant-memory systems established by Dolev et al.
Ethereum, a leading blockchain platform, has revolutionized the digital economy by enabling decentralized transactions and the execution of smart contracts. Ethereum transactions form the backbone of its network, facilitating peer-to-peer exchanges and interactions with complex decentralized applications. Smart contracts extend Ethereum's capabilities by automating processes and enabling trustless execution of agreements. Hence, understanding how these smart contracts interact is important in order to facilitate various performance optimizations, such as warming objects before they are accessed and enabling concurrent execution. Of particular interest to us are the development of the calling graph, the read sets and write sets of invocations within the same block, and the properties of the associated conflict graph derived from them. The latter is important for understanding the parallelization potential of smart contracts on Ethereum. We traced upwards of 2 million recent Ethereum blocks using the call tracer and the prestate tracer, out of a total of 21.4 million blocks at the time of writing. We report on the transactions-per-block distribution, the structure of call trees in smart contract invocations, and the ratio of value-transfer transactions to smart contract invocations, and we provide a comprehensive study of the structure of blocks' conflict graphs. We find that conflict graphs predominantly show a star-like configuration, along with other noteworthy structural properties.
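The conflict-graph construction can be illustrated briefly: two transactions in the same block conflict when one writes a storage key that the other reads or writes, and the resulting graph bounds how many transactions could execute concurrently. The tiny read/write sets below are made up for illustration, not taken from traced blocks.

```python
# Building a block's conflict graph from per-transaction read/write sets.
from itertools import combinations

txs = {
    "tx1": {"reads": {"A.bal"}, "writes": {"A.bal", "B.bal"}},
    "tx2": {"reads": {"C.bal"}, "writes": {"C.bal"}},
    "tx3": {"reads": {"B.bal"}, "writes": {"D.bal"}},
}

def conflicts(t, u):
    return (t["writes"] & (u["reads"] | u["writes"])
            or u["writes"] & (t["reads"] | t["writes"]))

edges = [(a, b) for a, b in combinations(txs, 2)
         if conflicts(txs[a], txs[b])]
print(edges)   # [('tx1', 'tx3')] -> tx2 can run in parallel with both
```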
This paper presents an in-depth investigation into the high-performance parallel optimization of the Fish School Behaviour (FSB) algorithm on the Setonix supercomputing platform using the OpenMP framework. Given the increasing demand for enhanced computational capability for complex, large-scale calculations across diverse domains, there is a pressing need for optimized parallel algorithms and computing structures. The FSB algorithm, inspired by the social behavior patterns found in nature, is an ideal candidate for parallelization due to its iterative and computationally intensive nature. This study leverages the capabilities of the Setonix platform and the OpenMP framework to analyze various aspects of multi-threading, such as thread counts, scheduling strategies, and OpenMP constructs, aiming to discern patterns and strategies that can improve program performance. Experiments were designed to rigorously test different configurations, and our results not only offer insights for the parallel optimization of FSB on Setonix but also provide valuable reference points for other parallel computational research using OpenMP. Looking forward, other factors, such as cache behavior and thread scheduling strategies at the micro and macro levels, hold potential for further exploration and optimization.
Efficient container image distribution is crucial for enabling machine learning inference at the network edge, where resource limitations and dynamic network conditions create significant challenges. In this paper, we present PeerSync, a decentralized P2P-based system designed to optimize image distribution in edge environments. PeerSync employs a popularity- and network-aware download engine that dynamically adapts to content popularity and real-time network conditions using a sliding window mechanism. PeerSync further integrates automated tracker election for rapid peer discovery and dynamic cache management for efficient storage utilization. We implement PeerSync with 8000+ lines of Rust code and test its performance extensively on both physical edge devices and Docker-based emulations. Experimental results show that PeerSync delivers a remarkable speed increase of 2.72x, 1.79x, and 1.28x compared to the Baseline, Dragonfly, and Kraken, respectively, while significantly reducing peak cross-network traffic by 90.72% under congested and varying network conditions.
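The sliding-window popularity tracking can be sketched simply: recent requests for each image layer are counted inside a fixed time window, and a download engine can prioritize the most-requested layers. The window size, layer digest, and prioritization rule below are illustrative, not PeerSync's actual engine.

```python
# Hedged sketch of sliding-window popularity tracking per image layer.
import time
from collections import defaultdict, deque

class PopularityWindow:
    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.events = defaultdict(deque)   # layer digest -> request times

    def record(self, layer, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[layer]
        q.append(now)
        while q and now - q[0] > self.window_s:
            q.popleft()                    # evict requests outside the window

    def popularity(self, layer, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[layer]
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q)

w = PopularityWindow(window_s=60.0)
for t in (0, 10, 20, 75):
    w.record("layer-digest", now=t)
print(w.popularity("layer-digest", now=80))   # 2 requests in the last 60 s
```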
The paradigm shift towards multi-core and heterogeneous computing, driven by the fundamental power and thermal limits of single-core processors, has established energy efficiency as a first-class design constraint in high-performance computing (HPC). Heterogeneous systems, integrating traditional multi-core CPUs with specialized accelerators like discrete (dGPU) and integrated (iGPU) graphics processing units, offer a compelling path to navigating the trade-offs between performance and power. However, quantifying these trade-offs on widely accessible hardware remains a critical area of study. This paper presents a direct, empirical measurement of the performance and energy-to-solution of a canonical HPC workload -- a 4096x4096 matrix-matrix multiplication -- on three distinct compute architectures within a single consumer-grade laptop: a multi-core AMD Ryzen 7 5800H CPU, a discrete NVIDIA GeForce GTX 1650 GPU, and an integrated AMD Radeon Vega GPU. Using standard, validated, and minimally intrusive tools such as Linux perf and nvidia-smi, we find that the discrete GPU is not only the performance leader, achieving a 93.5x speedup over the CPU, but is also the most energy-efficient, consuming only 2% of the energy used by the CPU, resulting in a 50-fold improvement in energy efficiency. These findings provide a practical demonstration of the "race to idle" principle and offer clear, quantitative guidance on architectural choices for energy-aware software development.