Microservice architectures are increasingly prevalent in organisations providing online applications. Recent studies have begun to explore the characteristics of real-world large-scale microservice deployments; however, their operational complexities, and the degree to which these complexities are consistent across different deployments, remain under-explored. In this paper, we analyse a microservice dataset released by Alibaba along three dimensions of complexity: scale, heterogeneity, and dynamicity. We find that large-scale deployments can consist of tens of thousands of microservices that support an even broader array of front-end functionality. Moreover, our analysis shows widespread long-tailed distributions of characteristics between microservices, such as share of workload and dependencies, highlighting inequality across the deployment. This diversity is also reflected in call graphs, where we find that whilst front-end services produce dominant call graphs, rarer non-dominant call graphs are prevalent and could involve dissimilar microservice calls. We also find that runtime dependencies between microservices deviate from the static view of system dependencies, and that the deployment undergoes daily changes to microservices. We discuss the implications of our findings for state-of-the-art research in microservice management and research testbed realism, and compare our results to previous descriptions of large-scale microservice deployments to begin to build an understanding of their commonalities.
We design a deterministic distributed $\mathcal{O}(\log n)$-round reduction from the $(2\Delta-2)$-edge coloring problem to the much easier $(2\Delta-1)$-edge coloring problem. This is almost optimal, as the $(2\Delta-2)$-edge coloring problem admits an $\Omega(\log_\Delta n)$ lower bound. Further, we also obtain an optimal $\mathcal{O}(\log_\Delta n)$-round reduction, albeit to the harder maximal independent set (MIS) problem. The current state-of-the-art for $(2\Delta - 1)$-edge coloring actually comes from an MIS algorithm by [Ghaffari \& Grunau, FOCS'24], which runs in $\widetilde{\mathcal{O}}(\log^{5/3} n)$ rounds. With our new reduction, this round complexity now carries over to the $(2\Delta - 2)$-edge coloring problem as well. Alternatively, one can also plug in the $(\mathrm{poly} \log \Delta + \mathcal{O}(\log^{\ast} n))$-round $(2\Delta - 1)$-edge coloring algorithm from [Balliu, Brandt, Kuhn \& Olivetti, PODC'22], which yields an optimal runtime of $\mathcal{O}(\log n)$ rounds for $\Delta \leq \mathrm{poly} \log n$. Previously, the fastest deterministic algorithm using fewer than $2\Delta - 1$ colors for general graphs by [Brandt, Maus, Narayanan, Schager \& Uitto, SODA'25] ran in $\widetilde{\mathcal{O}}(\log^3 n)$ rounds. In addition, we obtain an $\mathcal{O}(\log \log n)$-round randomized reduction of $(2\Delta - 2)$-edge coloring to $(2\Delta - 1)$-edge coloring. This improves upon the (very recent) best randomized algorithm using fewer than $2\Delta - 1$ colors from [Bourreau, Brandt \& Nolin, STOC'25] by reducing the round complexity from $\widetilde{\mathcal{O}}(\log^{8/3}\log n)$ down to $\widetilde{\mathcal{O}}(\log^{5/3} \log n)$.
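Purely as a restatement of how the bounds quoted above compose (a summary of the stated claims, not a new derivation), the deterministic results follow from adding the cost of the reduction to the cost of the target problem:
\[
\underbrace{\mathcal{O}(\log n)}_{\text{reduction}} + \underbrace{\widetilde{\mathcal{O}}(\log^{5/3} n)}_{(2\Delta-1)\text{-edge coloring via MIS}} = \widetilde{\mathcal{O}}(\log^{5/3} n),
\qquad
\underbrace{\mathcal{O}(\log n)}_{\text{reduction}} + \underbrace{\mathrm{poly}\log\Delta + \mathcal{O}(\log^{\ast} n)}_{\text{[Balliu et al., PODC'22]}} = \mathcal{O}(\log n) \text{ for } \Delta \leq \mathrm{poly}\log n .
\]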
Asynchronous Byzantine Fault Tolerant (BFT) consensus protocols have garnered significant attention with the rise of blockchain technology. A typical asynchronous protocol is designed by executing sequential instances of the Asynchronous Common Sub-seQuence (ACSQ). The ACSQ protocol consists of two primary components: the Asynchronous Common Subset (ACS) protocol and a block sorting mechanism, with the ACS protocol comprising two stages: broadcast and agreement. However, current protocols encounter three critical issues: high latency arising from the execution of the agreement stage, latency instability due to the integral-sorting mechanism, and reduced throughput caused by block discarding. To address these issues, we propose Falcon, an asynchronous BFT protocol that achieves low latency and enhanced throughput. Falcon introduces a novel broadcast protocol, Graded Broadcast (GBC), which enables a block to be included in the ACS set directly, bypassing the agreement stage and thereby reducing latency. To ensure safety, Falcon incorporates a new binary agreement protocol called Asymmetrical Asynchronous Binary Agreement (AABA), designed to complement GBC. Additionally, Falcon employs a partial-sorting mechanism, allowing continuous rather than simultaneous block committing, enhancing latency stability. Finally, we incorporate an agreement trigger that, before its activation, enables nodes to wait for more blocks to be delivered and committed, thereby boosting throughput. We conduct a series of experiments to evaluate Falcon, demonstrating its superior performance.
Decentralized Federated Learning (DFL) eliminates the reliance on the server-client architecture inherent in traditional federated learning, attracting significant research interest in recent years. Simultaneously, the objective functions in machine learning tasks are often nonconvex and frequently incorporate additional, potentially nonsmooth regularization terms to satisfy practical requirements, thereby forming nonconvex composite optimization problems. Employing DFL methods to solve such general optimization problems leads to the formulation of Decentralized Nonconvex Composite Federated Learning (DNCFL), a topic that remains largely underexplored. In this paper, we propose a novel DNCFL algorithm, termed \textbf{DEPOSITUM}. Built upon proximal stochastic gradient tracking, DEPOSITUM mitigates the impact of data heterogeneity by enabling clients to approximate the global gradient. The introduction of momentum terms in the proximal gradient descent step, replacing tracking variables, reduces the variance introduced by stochastic gradients. Additionally, DEPOSITUM supports local updates of client variables, significantly reducing communication costs. Theoretical analysis demonstrates that DEPOSITUM achieves an expected $\epsilon$-stationary point with an iteration complexity of $\mathcal{O}(1/\epsilon^2)$. The proximal gradient, consensus errors, and gradient estimation errors decrease at a sublinear rate of $\mathcal{O}(1/T)$. With appropriate parameter selection, the algorithm achieves network-independent linear speedup without requiring mega-batch sampling. Finally, we apply DEPOSITUM to the training of neural networks on real-world datasets, systematically examining the influence of various hyperparameters on its performance. Comparisons with other federated composite optimization algorithms validate the effectiveness of the proposed method.
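As a rough illustration of the ingredients named above, and only that, the sketch below shows one synchronized round of decentralized proximal stochastic gradient tracking with momentum. The update order, parameter names, the mixing matrix W, the gradient oracle stochastic_grad, and the choice of an l1 regularizer are assumptions for illustration, not DEPOSITUM's actual rules.

# Illustrative sketch (not the authors' exact DEPOSITUM update): one round of a
# decentralized proximal step with a momentum-smoothed gradient tracker, assuming an
# l1 regularizer so that the prox is soft-thresholding. W is a doubly stochastic
# mixing matrix over the client graph; stochastic_grad(i, x) is a user-supplied oracle.
import numpy as np

def soft_threshold(z, tau):
    # prox of tau * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def decentralized_prox_tracking_round(X, V, W, stochastic_grad, lr=0.1, beta=0.9, lam=1e-3):
    """One synchronized round for n clients.
    X: (n, d) client iterates, V: (n, d) momentum/tracking variables,
    W: (n, n) doubly stochastic mixing matrix."""
    n, _ = X.shape
    G = np.stack([stochastic_grad(i, X[i]) for i in range(n)])  # local stochastic gradients
    V = beta * (W @ V) + (1.0 - beta) * G   # gossip-averaged, momentum-smoothed tracker
    X = W @ X - lr * V                      # consensus step plus descent along the tracker
    X = soft_threshold(X, lr * lam)         # proximal step for the nonsmooth l1 term
    return X, V

# Toy usage: 3 clients with heterogeneous quadratic losses and a ring-like mixing matrix.
rng = np.random.default_rng(0)
targets = rng.normal(size=(3, 5))
grad = lambda i, x: x - targets[i] + 0.01 * rng.normal(size=x.shape)  # noisy local gradient
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
X, V = np.zeros((3, 5)), np.zeros((3, 5))
for _ in range(100):
    X, V = decentralized_prox_tracking_round(X, V, W, grad)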
This paper presents an adversary model and a simulation framework specifically tailored for analyzing attacks on distributed systems composed of multiple distributed protocols, with a focus on assessing the security of blockchain networks. Our model classifies and constrains adversarial actions based on the assumptions of the target protocols, defined by failure models, communication models, and the fault tolerance thresholds of Byzantine Fault Tolerant (BFT) protocols. The goal is to study not only the intended effects of adversarial strategies but also their unintended side effects on critical system properties. We apply this framework to analyze fairness properties in a Hyperledger Fabric (HF) blockchain network. Our focus is on novel fairness attacks that involve coordinated adversarial actions across various HF services. Simulations show that even a constrained adversary can violate fairness with respect to specific clients (client fairness) and impact related guarantees (order fairness), which relate the reception order of transactions to their final order in the blockchain. This paper significantly extends our previous work by introducing and evaluating a mitigation mechanism specifically designed to counter transaction reordering attacks. We implement and integrate this defense into our simulation environment, demonstrating its effectiveness under diverse conditions.
Fine-tuning plays a crucial role in adapting models to downstream tasks with minimal training efforts. However, the rapidly increasing size of foundation models poses a daunting challenge for accommodating foundation model fine-tuning in most commercial devices, which often have limited memory bandwidth. Techniques like model sharding and tensor parallelism address this issue by distributing computation across multiple devices to meet memory requirements. Nevertheless, these methods do not fully leverage the nature of foundation models to facilitate the fine-tuning process, resulting in high computational costs and imbalanced workloads. We introduce a novel Distributed Dynamic Fine-Tuning (D2FT) framework that strategically orchestrates operations across attention modules based on our observation that not all attention modules are necessary for forward and backward propagation in fine-tuning foundation models. Through three innovative selection strategies, D2FT significantly reduces the computational workload required for fine-tuning foundation models. Furthermore, D2FT addresses workload imbalances in distributed computing environments by optimizing these selection strategies via multiple knapsack optimization. Our experimental results demonstrate that the proposed D2FT framework reduces the training computational costs by 40% and training communication costs by 50% with only 1% to 2% accuracy drops on the CIFAR-10, CIFAR-100, and Stanford Cars datasets. Moreover, the results show that D2FT can be effectively extended to the recent LoRA, a state-of-the-art parameter-efficient fine-tuning technique. When the computational cost is reduced by 40% or the communication cost by 50%, D2FT with LoRA loses only 4% to 6% top-1 accuracy on the Stanford Cars dataset.
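To give a concrete, if simplified, picture of the balancing problem mentioned above, the following toy sketch assigns attention modules to devices under per-device compute budgets using a greedy value-density heuristic. The module scores, costs, budgets, and the greedy rule itself are illustrative assumptions rather than D2FT's actual multiple knapsack optimization.

# Toy multiple-knapsack balancing: decide which attention modules stay active and on
# which device, subject to per-device compute budgets. A greedy importance-per-cost
# heuristic stands in for the paper's optimization; all numbers are hypothetical.
def greedy_multiple_knapsack(modules, budgets):
    """modules: list of (name, importance, cost); budgets: per-device capacities.
    Returns ({device_index: [module names]}, [skipped module names])."""
    remaining = list(budgets)
    assignment = {i: [] for i in range(len(budgets))}
    skipped = []
    # Consider modules with the best importance per unit of cost first.
    for name, importance, cost in sorted(modules, key=lambda m: m[2] / max(m[1], 1e-9)):
        # Try the device with the most remaining capacity; if it does not fit there,
        # it fits nowhere, so the module is skipped for this fine-tuning step.
        best = max(range(len(remaining)), key=lambda i: remaining[i])
        if remaining[best] >= cost:
            assignment[best].append(name)
            remaining[best] -= cost
        else:
            skipped.append(name)
    return assignment, skipped

# Example with hypothetical numbers: 6 attention modules, 2 devices.
mods = [(f"attn_{i}", imp, cost) for i, (imp, cost) in
        enumerate([(0.9, 4), (0.2, 3), (0.7, 2), (0.1, 5), (0.5, 2), (0.8, 3)])]
print(greedy_multiple_knapsack(mods, budgets=[6, 6]))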
Decentralized federated learning (DFL) is a promising machine learning paradigm for bringing artificial intelligence (AI) capabilities to the network edge. Running DFL on top of edge networks, however, faces severe performance challenges due to the extensive parameter exchanges between agents. Most existing solutions for these challenges were based on simplistic communication models, which cannot capture the case of learning over a multi-hop bandwidth-limited network. In this work, we address this problem by jointly designing the communication scheme for the overlay network formed by the agents and the mixing matrix that controls the communication demands between the agents. By carefully analyzing the properties of our problem, we cast each design problem into a tractable optimization and develop an efficient algorithm with guaranteed performance. Our evaluations based on real topology and data show that the proposed algorithm can reduce the total training time by over $80\%$ compared to the baseline without sacrificing accuracy, while significantly improving the computational efficiency over the state of the art.
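For readers unfamiliar with the term, the mixing matrix mentioned above specifies how much weight each agent places on each neighbor's model when averaging. The standard Metropolis construction below is a common baseline for a given overlay graph, shown only as a point of reference and not as the jointly optimized design proposed in the paper.

# Reference construction of a symmetric, doubly stochastic mixing matrix
# (Metropolis weights) for an undirected overlay graph given as an adjacency list.
import numpy as np

def metropolis_mixing_matrix(adj):
    n = len(adj)
    deg = [len(neigh) for neigh in adj]
    W = np.zeros((n, n))
    for i in range(n):
        for j in adj[i]:
            W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # self-weight keeps each row summing to 1
    return W

# Example: a 4-agent ring overlay.
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(metropolis_mixing_matrix(ring))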
The Julia programming language has gained acceptance within the High-Performance Computing (HPC) community due to its ability to tackle the two-language problem: Julia code feels as high-level as Python but allows developers to tune it to C-level performance. However, to squeeze every drop of performance, Julia needs to integrate with advanced performance analysis tools, also known as profilers. In this work, we present Extrae.jl, a Julia package to interface with the Extrae profiler.
Distributed stream processing systems rely on the dataflow model to define and execute streaming jobs, organizing computations as Directed Acyclic Graphs (DAGs) of operators. Adjusting the parallelism of these operators is crucial to handling fluctuating workloads efficiently while balancing resource usage and processing performance. However, existing methods often fail to effectively utilize execution histories or fully exploit DAG structures, limiting their ability to identify bottlenecks and determine the optimal parallelism. In this paper, we propose StreamTune, a novel approach for adaptive parallelism tuning in stream processing systems. StreamTune incorporates a pre-training and fine-tuning framework that leverages global knowledge from historical execution data for job-specific parallelism tuning. In the pre-training phase, StreamTune clusters the historical data with Graph Edit Distance and pre-trains a Graph Neural Network-based encoder per cluster to capture the correlation between the operator parallelism, DAG structures, and the identified operator-level bottlenecks. In the online tuning phase, StreamTune iteratively refines operator parallelism recommendations using an operator-level bottleneck prediction model enforced with a monotonic constraint, which aligns with the observed system performance behavior. Evaluation results demonstrate that StreamTune reduces reconfigurations by up to 29.6% and parallelism degrees by up to 30.8% in Apache Flink under a synthetic workload. In Timely Dataflow, StreamTune achieves up to an 83.3% reduction in parallelism degrees while maintaining comparable processing performance under the Nexmark benchmark, when compared to the state-of-the-art methods.
Context. Microservice-based systems have gained significant attention over the past years. A critical factor for understanding and analyzing the behavior of these systems is the collection of monitoring data such as logs, metrics, and traces. These data modalities can be used for anomaly detection and root cause analysis of failures. In particular, multi-modal methods utilizing several types of this data at once have gained traction in the research community since these three modalities capture different dimensions of system behavior. Aim. We provide a dataset that supports research on anomaly detection and architectural degradation in microservice systems. We generate a comprehensive dataset of logs, metrics, and traces from a production microservice system to enable the exploration of multi-modal fusion methods that integrate multiple data modalities. Method. We dynamically tested the various APIs of the microservice-based system, which implements the OAuth 2.0 protocol, using the Locust tool. For each execution of the prepared test suite, we collect logs and performance metrics for correct and erroneous calls, with data labeled according to the error triggered during the call. Contributions. We collected approximately 657,000 individual log files, totaling over two billion log lines. In addition, we collected more than 45 million individual metric files that contain 485 unique metrics. We provide an initial analysis of logs, identify key metrics through PCA, and discuss challenges in collecting traces for this system. Moreover, we highlight the possibilities for making a more fine-grained version of the dataset. This work advances anomaly detection in microservice systems using multiple data sources.
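A minimal Locust test of the kind described in the Method section might look like the sketch below; the endpoint paths, credentials, and the way an erroneous call is triggered are hypothetical placeholders, not the actual test suite used to build the dataset.

# Minimal Locust sketch: simulate users obtaining an OAuth token and issuing a mix of
# correct and deliberately erroneous API calls. All endpoints and payloads are hypothetical.
from locust import HttpUser, task, between

class OAuthApiUser(HttpUser):
    wait_time = between(1, 3)  # think time between calls, in seconds

    def on_start(self):
        # Obtain a token once per simulated user (hypothetical token endpoint).
        resp = self.client.post("/oauth/token", data={
            "grant_type": "password", "username": "demo", "password": "demo"})
        self.token = resp.json().get("access_token", "") if resp.ok else ""

    @task(3)
    def correct_call(self):
        self.client.get("/api/userinfo",
                        headers={"Authorization": f"Bearer {self.token}"})

    @task(1)
    def erroneous_call(self):
        # Deliberately invalid token to produce a labeled erroneous call.
        self.client.get("/api/userinfo",
                        headers={"Authorization": "Bearer invalid-token"})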
This work investigates the data-aware multi-service application placement problem in Cloud-Edge settings. We previously introduced EdgeWise, a hybrid approach that combines declarative programming with Mixed-Integer Linear Programming (MILP) to determine optimal placements that minimise operational costs and unnecessary data transfers. The declarative stage pre-processes infrastructure constraints to improve the efficiency of the MILP solver, achieving optimal placements in terms of operational costs, with significantly reduced execution times. In this extended version, we improve the declarative stage with continuous reasoning, presenting EdgeWiseCR, which enables the system to reuse existing placements and reduce unnecessary recomputation and service migrations. In addition, we conducted an expanded experimental evaluation considering multiple applications, diverse network topologies, and large-scale infrastructures with dynamic failures. The results show that EdgeWiseCR achieves up to 65% faster execution compared to EdgeWise, while preserving placement stability under dynamic conditions.
We present a deterministic parallel multilevel algorithm for balanced hypergraph partitioning that matches the state of the art for non-deterministic algorithms. Deterministic parallel algorithms produce the same result in each invocation, which is crucial for reproducibility. Moreover, determinism is highly desirable in application areas such as VLSI design. While there has been tremendous progress in parallel hypergraph partitioning algorithms recently, deterministic counterparts for high-quality local search techniques are missing. Consequently, solution quality is severely lacking in comparison to the non-deterministic algorithms. In this work we close this gap. First, we present a generalization of the recently proposed Jet refinement algorithm. While Jet is naturally amenable to determinism, significant changes are necessary to achieve competitive performance on hypergraphs. We also propose an improved deterministic rebalancing algorithm for Jet. Moreover, we consider the powerful but slower flow-based refinement and introduce a scheme that enables deterministic results while building upon a non-deterministic maximum flow algorithm. As demonstrated in our thorough experimental evaluation, this results in the first deterministic parallel partitioner that is competitive to the highest quality solvers. With Jet refinement, we match or exceed the quality of Mt-KaHyPar's non-deterministic default configuration while being only 15\% slower on average. We observe self-relative speedups of up to 55$\times$ on 64 cores with a 22.5$\times$ average speedup. Our deterministic flow-based refinement exceeds the quality of the non-deterministic variant by roughly 1\% on average but requires 31\% more running time.
Emulating computationally intensive scientific simulations is essential to enable uncertainty quantification, optimization, and decision-making at scale. Gaussian Processes (GPs) offer a flexible and data-efficient foundation for statistical emulation, but their poor scalability limits applicability to large datasets. We introduce the Scaled Block Vecchia (SBV) algorithm for distributed GPU-based systems. SBV integrates the Scaled Vecchia approach for anisotropic input scaling with the Block Vecchia (BV) method to reduce computational and memory complexity while leveraging GPU acceleration techniques for efficient linear algebra operations. To the best of our knowledge, this is the first distributed implementation of any Vecchia-based GP variant. Our implementation employs MPI for inter-node parallelism and the MAGMA library for GPU-accelerated batched matrix computations. We demonstrate the scalability and efficiency of the proposed algorithm through experiments on synthetic and real-world workloads, including a 50M point simulation from a respiratory disease model. SBV achieves near-linear scalability on up to 64 A100 and GH200 GPUs, handles 320M points, and reduces energy use relative to exact GP solvers, establishing SBV as a scalable and energy-efficient framework for emulating large-scale scientific models on GPU-based distributed systems.
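For context, the (block) Vecchia idea referenced above replaces the exact Gaussian log-likelihood with a product of low-dimensional conditional densities. In generic notation (not the paper's), with the observations grouped into blocks $\mathbf{y}_1,\dots,\mathbf{y}_B$ and $N(b)$ a small conditioning set of previously ordered, nearby blocks:
\[
\log p(\mathbf{y}) \;\approx\; \sum_{b=1}^{B} \log p\!\left(\mathbf{y}_b \mid \mathbf{y}_{N(b)}\right),
\]
so the dense $\mathcal{O}(n^3)$ factorization is replaced by many small, independently computable conditional Gaussians, which is what makes batched GPU linear algebra and the MPI-based distribution described above effective.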
Composite federated learning offers a general framework for solving machine learning problems with additional regularization terms. However, many existing methods require clients to perform multiple proximal operations to handle non-smooth terms, and their performance is often susceptible to data heterogeneity. To overcome these limitations, we propose a novel composite federated learning algorithm called \textbf{FedCanon}, designed to solve the optimization problems comprising a possibly non-convex loss function and a weakly convex, potentially non-smooth regularization term. By decoupling proximal mappings from local updates, FedCanon requires only a single proximal evaluation on the server per iteration, thereby reducing the overall proximal computation cost. It also introduces control variables that incorporate global gradient information into client updates, which helps mitigate the effects of data heterogeneity. Theoretical analysis demonstrates that FedCanon achieves sublinear convergence rates under general non-convex settings and linear convergence under the Polyak-{\L}ojasiewicz condition, without relying on bounded heterogeneity assumptions. Experiments demonstrate that FedCanon outperforms the state-of-the-art methods in terms of both accuracy and computational efficiency, particularly under heterogeneous data distributions.
Federated Learning (FL) has attracted considerable interest due to growing privacy concerns and regulations like the General Data Protection Regulation (GDPR), which stresses the importance of privacy-preserving and fair machine learning approaches. In FL, model training takes place on decentralized data, allowing clients to upload a locally trained model and receive a globally aggregated model without exposing sensitive information. However, challenges related to fairness, such as biases, uneven performance among clients, and the "free rider" issue, complicate its adoption. In this paper, we examine the use of Mutual Information (MI)-based loss functions to address these concerns. MI has proven to be a powerful method for measuring dependencies between variables and optimizing deep learning models. By leveraging MI to extract essential features and minimize biases, we aim to improve both the fairness and effectiveness of FL systems. Through extensive benchmarking, we assess the impact of MI-based losses in reducing disparities among clients while enhancing the overall performance of FL.
Performance benchmarking is a common practice in software engineering, particularly when building large-scale, distributed, and data-intensive systems. While cloud environments offer several advantages for running benchmarks, it is often reported that benchmark results can vary significantly between repetitions -- making it difficult to draw reliable conclusions about real-world performance. In this paper, we empirically quantify the impact of cloud performance variability on benchmarking results, focusing on stream processing applications as a representative type of data-intensive, performance-critical system. In a longitudinal study spanning more than three months, we repeatedly executed an application benchmark used in research and development at Dynatrace. This allows us to assess various aspects of performance variability, particularly concerning temporal effects. With approximately 591 hours of experiments, deploying 789 Kubernetes clusters on AWS and executing 2366 benchmarks, this is likely the largest study of its kind and the only one addressing performance from an end-to-end, i.e., application benchmark perspective. Our study confirms that performance variability exists, but it is less pronounced than often assumed (coefficient of variation of < 3.7%). Unlike related studies, we find that performance does exhibit a daily and weekly pattern, although with only small variability (<= 2.5%). Re-using benchmarking infrastructure across multiple repetitions introduces only a slight reduction in result accuracy (<= 2.5 percentage points). These key observations hold consistently across different cloud regions and machine types with different processor architectures. We conclude that for engineers and researchers focused on detecting substantial performance differences (e.g., > 5%) in...
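The kind of variability and temporal analysis described above can be approximated from raw benchmark results with a few lines of pandas. The CSV layout used here (one row per run with a timestamp, region, machine type, and throughput score) is an assumed schema for illustration, not Dynatrace's actual data format.

# Sketch: coefficient of variation per configuration, plus hour-of-day and day-of-week
# aggregation to look for temporal patterns. Column names are assumed placeholders.
import pandas as pd

runs = pd.read_csv("benchmark_results.csv", parse_dates=["timestamp"])

# Coefficient of variation across all repetitions of the same configuration.
cv = runs.groupby(["region", "machine_type"])["score"].agg(lambda s: s.std() / s.mean())
print(cv.sort_values(ascending=False))

# Temporal effects: average score by hour of day and by day of week.
runs["hour"] = runs["timestamp"].dt.hour
runs["weekday"] = runs["timestamp"].dt.day_name()
print(runs.groupby("hour")["score"].mean())
print(runs.groupby("weekday")["score"].mean())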
LLM inference is essential for applications like text summarization, translation, and data analysis, but the high cost of GPU instances from Cloud Service Providers (CSPs) like AWS is a major burden. This paper proposes InferSave, a cost-efficient VM selection framework for cloud-based LLM inference. InferSave optimizes KV cache offloading based on Service Level Objectives (SLOs) and workload characteristics, estimates GPU memory needs, and recommends cost-effective VM instances. Additionally, the Compute Time Calibration Function (CTCF) improves instance selection accuracy by adjusting for discrepancies between theoretical and actual GPU performance. Experiments on AWS GPU instances show that selecting lower-cost instances without KV cache offloading improves cost efficiency by up to 73.7% for online workloads, while KV cache offloading saves up to 20.19% for offline workloads.
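At its simplest, SLO-aware instance selection of the kind described above amounts to filtering candidate VMs by estimated GPU memory need and predicted latency, then taking the cheapest. The toy catalog and latency model below are hypothetical and ignore the calibration (CTCF) and KV cache offloading decisions that InferSave actually models.

# Toy SLO-aware VM selection: keep instances that satisfy the memory estimate and the
# latency SLO, then pick the cheapest. Catalog entries and the latency model are made up.
def pick_instance(catalog, required_gpu_mem_gb, slo_latency_s, est_latency):
    """catalog: list of dicts with 'name', 'gpu_mem_gb', 'price_per_h'.
    est_latency(vm) returns the predicted per-request latency on that VM."""
    feasible = [vm for vm in catalog
                if vm["gpu_mem_gb"] >= required_gpu_mem_gb
                and est_latency(vm) <= slo_latency_s]
    return min(feasible, key=lambda vm: vm["price_per_h"]) if feasible else None

catalog = [
    {"name": "small-gpu",  "gpu_mem_gb": 16, "price_per_h": 0.5},
    {"name": "medium-gpu", "gpu_mem_gb": 24, "price_per_h": 1.1},
    {"name": "large-gpu",  "gpu_mem_gb": 80, "price_per_h": 4.1},
]
print(pick_instance(catalog, required_gpu_mem_gb=20, slo_latency_s=1.0,
                    est_latency=lambda vm: 40.0 / vm["gpu_mem_gb"]))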
Existing real-time decoders for surface codes are limited to isolated logical qubits and do not support logical operations involving multiple logical qubits. We present DECONET, a first-of-its-kind decoding system that scales to thousands of logical qubits and supports logical operations implemented through lattice surgery. DECONET organizes compute resources in a network-integrated hybrid tree-grid structure, which results in minimal latency increase and no throughput degradation as the system grows. Specifically, DECONET can be scaled to any arbitrary number of $l$ logical qubits by increasing the compute resources by $O(l \log l)$, which provides the required $O(l)$ growth in I/O resources while incurring only an $O(\log l)$ increase in latency, a modest growth that is sufficient for thousands of logical qubits. Moreover, we analytically show that the scaling approach preserves throughput, keeping DECONET backlog-free for any number of logical qubits. We report an exploratory prototype of DECONET, called DECONET/HELIOS, built with five VMK-180 FPGAs, that successfully decodes 100 logical qubits of distance five. For 100 logical qubits, under a phenomenological noise rate of 0.1%, DECONET/HELIOS has an average latency of 2.40 $\mu$s and an inverse throughput of 0.84 $\mu$s per measurement round.