Browse, search and filter the latest cybersecurity research papers from arXiv
Modern tensor applications, especially foundation models and generative AI workloads, require multiple input modalities (both vision and language), which increases the demand for flexible accelerator architectures. Existing frameworks suffer from a trade-off between design flexibility and RTL-generation productivity: they are either limited to a few hand-written templates or cannot generate RTL automatically. To address this challenge, we propose the LEGO framework, which targets tensor applications, automatically generates spatial architecture designs, and outputs synthesizable RTL code without handwritten RTL design templates. Leveraging an affine-transformation-based architecture representation, the LEGO front end finds interconnections between function units, synthesizes the memory system, and fuses different spatial dataflow designs based on data-reuse analysis. The LEGO back end then translates the hardware into a primitive-level graph to perform lower-level optimizations, and applies a set of linear-programming algorithms to optimally insert pipeline registers and reduce the overhead of unused logic when switching spatial dataflows. Our evaluation demonstrates that LEGO achieves 3.2x speedup and 2.4x higher energy efficiency than the prior accelerator Gemmini, and can generate a single architecture for diverse modern foundation models in generative AI applications.
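To make the affine-transformation idea concrete, here is a minimal sketch of mapping a GEMM loop-nest iteration (i, j, k) to a (PE-x, PE-y, time) coordinate with a transformation matrix; the matrix values are illustrative, not taken from LEGO itself.

```python
import numpy as np

# Toy affine space-time mapping for a 3-deep GEMM loop nest (i, j, k).
# Rows of T pick which loop indices map to PE x, PE y, and time.
# This particular T realizes an output-stationary dataflow; the values
# are illustrative, not LEGO's.
T = np.array([[1, 0, 0],   # PE x-coordinate <- i
              [0, 1, 0],   # PE y-coordinate <- j
              [0, 0, 1]])  # time step       <- k

def place(i, j, k):
    """Map one iteration to (pe_x, pe_y, t)."""
    return tuple(int(v) for v in T @ np.array([i, j, k]))

# Iterations sharing (pe_x, pe_y) reuse the same output register;
# interconnect can be derived from the difference of mapped coordinates
# between producer and consumer iterations.
print(place(0, 1, 0))  # -> (0, 1, 0)
print(place(0, 1, 1))  # -> (0, 1, 1): same PE, next cycle (output reuse)
```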
Wireless techniques for monitoring human vital signs, such as heart and breathing rates, offer a promising solution in the context of joint communication and sensing (JCAS), with applications in medicine, sports, safety, security, and even the military. This paper reports experimental results obtained at the Fraunhofer Institute for Integrated Circuits in Ilmenau, demonstrating the effectiveness of an indoor orthogonal frequency-division multiplexing (OFDM) JCAS system for detecting human heart and breathing rates. The system operated in a bistatic configuration at an FR2 frequency of 26.5 GHz with a variable bandwidth of up to 1 GHz. Measurements were taken under various scenarios, including a subject lying down, sitting, or walking, in both line-of-sight and non-line-of-sight conditions, and with one or two subjects present simultaneously. The results indicate that while vital-sign detection is generally feasible, its effectiveness is influenced by several factors, such as the subject's clothing and activity, as well as the distance and angle relative to the sensing system. In addition, no significant influence of bandwidth was detected, since the vital-sign information is encoded in the phase of the signal.
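As a minimal sketch of the phase-based detection principle: chest motion modulates the path length and hence the phase of the estimated channel, so a spectral peak search over the phase track recovers breathing and heart rates. The snapshot rate, amplitudes, and noise level below are illustrative, not the measured values from this system.

```python
import numpy as np

# Synthetic phase track: 15 breaths/min and 72 bpm, assuming one ideal
# channel-phase estimate per OFDM snapshot (parameters illustrative).
fs = 20.0                            # snapshot rate in Hz
t = np.arange(0, 60, 1 / fs)         # 60 s observation window
breath, heart = 0.25, 1.2            # Hz
phase = (0.8 * np.sin(2 * np.pi * breath * t)     # chest displacement
         + 0.05 * np.sin(2 * np.pi * heart * t)   # much weaker heartbeat
         + 0.01 * np.random.randn(t.size))        # phase noise

spec = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
# Search physiologically plausible bands separately.
b_band = (freqs > 0.1) & (freqs < 0.5)
h_band = (freqs > 0.8) & (freqs < 2.0)
print("breathing ~", freqs[b_band][spec[b_band].argmax()], "Hz")
print("heart     ~", freqs[h_band][spec[h_band].argmax()], "Hz")
```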
Solving sparse systems of linear equations is a fundamental problem in the field of numerical methods, with applications spanning from circuit design to urban planning. These problems can have millions of constraints, such as when laying out transistors on a circuit or optimizing traffic light timings, making fast sparse solvers extremely important. However, existing state-of-the-art software-level solutions for solving sparse linear systems, termed iterative solvers, are extremely inefficient on current hardware. This inefficiency can be attributed to two key causes: (1) poor short-term data reuse, which causes frequent, irregular memory accesses, and (2) complex data dependencies, which limit parallelism. Hence, in this paper, we present an FPGA implementation of the existing Azul accelerator, an SRAM-only hardware accelerator that achieves both high memory bandwidth utilization and high arithmetic intensity. Azul features a grid of tiles, each composed of a processing element (PE) and a small independent SRAM memory, all connected over a network-on-chip (NoC). We implement Azul on an FPGA using simple RISC-V CPU cores connected to a memory hierarchy of different FPGA memory modules. We utilize custom RISC-V ISA extensions to implement a task-based programming model for the various PEs, allowing communication over the NoC. Finally, we design simple distributed test cases to functionally verify the FPGA implementation, confirming performance equivalent to an architectural simulation of the Azul framework.
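The access pattern behind cause (1) is easy to see in the core kernel of iterative solvers, a sparse matrix-vector product. The CSR sketch below is a generic illustration, not Azul's implementation: the gather through `indices` jumps unpredictably across `x`, which is exactly the traffic a per-tile SRAM is meant to keep on-chip.

```python
import numpy as np

# CSR sparse matrix-vector product, the core kernel of iterative solvers.
def spmv_csr(data, indices, indptr, x):
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        # Irregular gather: column indices jump unpredictably across x,
        # giving poor short-term reuse on cache-based hardware.
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 3x3 example matrix [[4,1,0],[1,3,0],[0,0,2]] in CSR form.
data    = np.array([4.0, 1.0, 1.0, 3.0, 2.0])
indices = np.array([0,   1,   0,   1,   2])
indptr  = np.array([0, 2, 4, 5])
print(spmv_csr(data, indices, indptr, np.array([1.0, 2.0, 3.0])))
```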
We present a design for an extensible video conferencing stack implemented entirely in hardware on a Nexys4 DDR FPGA, which uses the M-JPEG codec to compress video and a UDP networking stack to communicate between the FPGA and the receiving computer. This networking stack accepts real-time updates from both the video codec and the audio controller, enabling video to be streamed at 30 FPS from the FPGA to a computer. On the computer side, a Python script reads the Ethernet packets and decodes them into video and audio for real-time playback. We evaluate this architecture both through functional, simulation-driven verification in Cocotb and by synthesizing the SystemVerilog RTL in Vivado for deployment on our Nexys4 DDR FPGA, where we measure both end-to-end latency and throughput of video transmission.
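A minimal sketch of the receive side, assuming one JPEG frame may span several UDP datagrams; the port number and framing are illustrative assumptions, not the exact packet format used by this FPGA design. M-JPEG frames are self-delimiting JPEGs, so the receiver can resynchronize on the standard start/end-of-image markers.

```python
import socket

SOI, EOI = b"\xff\xd8", b"\xff\xd9"   # JPEG start/end-of-image markers

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))          # hypothetical port

buf = bytearray()
while True:
    payload, _ = sock.recvfrom(2048)
    buf += payload
    start, end = buf.find(SOI), buf.find(EOI)
    if start != -1 and end > start:
        frame = bytes(buf[start:end + 2])   # one complete M-JPEG frame
        del buf[:end + 2]
        # hand `frame` to any JPEG decoder for real-time playback
        print(f"frame: {len(frame)} bytes")
```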
Statistical computations are becoming increasingly important. These computations often need to be performed in log-space because probabilities become extremely small under repeated multiplication. While using logarithms effectively prevents numerical underflow, this paper shows that the cost is high in performance, resource utilization, and, notably, numerical accuracy. This paper then argues that posit, a recently proposed floating-point format, is a better strategy for statistical computations operating on extremely small numbers because of its unique encoding mechanism. To that end, this paper performs a comprehensive analysis comparing posit, binary64, and logarithmic representations, examining individual arithmetic operations, statistical bioinformatics applications, and their accelerators. FPGA implementation results highlight that posit-based accelerators can achieve up to two orders of magnitude higher accuracy, up to 60\% lower resource utilization, and up to $1.3\times$ speedup compared to log-space accelerators. This improvement translates to $2\times$ performance per unit resource on the FPGA.
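A small worked example of why codes move to log-space, and where its accuracy cost hides: the direct product of many small probabilities underflows binary64, while the log-space sum does not; but log-space *addition* requires a logsumexp, which is where the extra arithmetic and rounding come from. Posit arithmetic itself is not shown here, since it needs an external library (e.g., softposit); the point of the sketch is the log-space mechanics.

```python
import math

probs = [1e-30] * 20

direct = 1.0
for p in probs:
    direct *= p
print(direct)                       # 0.0 -- underflows past ~1e-308

log_p = sum(math.log(p) for p in probs)
print(log_p)                        # -1381.55...: representable in log-space

# Log-space addition of probabilities a and b from la=log(a), lb=log(b):
# the exp/log1p round trip is the accuracy and hardware cost of log-space.
def log_add(la, lb):
    m = max(la, lb)
    return m + math.log1p(math.exp(min(la, lb) - m))

print(log_add(math.log(1e-200), math.log(1e-200)))  # = log(2e-200)
```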
The Versatile Video Coding (VVC) standard significantly improves compression efficiency over its predecessor, HEVC, but at the cost of substantially higher computational complexity, particularly in intra-frame prediction. This stage employs various directional modes, each requiring multiple multiplications between reference samples and constant coefficients. To optimize these operations in hardware accelerators, multiplierless constant multiplication (MCM) blocks offer a promising solution. However, VVC's interpolation filters have more than fifty distinct coefficients, making MCM implementations resource-intensive. This work proposes an approximation method that reduces the number of interpolation coefficients by averaging fixed subsets of them, thereby decreasing MCM block size and potentially lowering circuit area and power consumption. Six MCM block architectures for angular intra prediction are introduced, five of which use the proposed approximation method, and we evaluate the trade-off between coefficient reduction and coding efficiency against a conventional multiplier architecture. Experimental results on ten videos demonstrate that only two MCM implementations exceed a 4% BD-Rate increase, with a 2.6% average increase in the worst case, while two of the MCM implementations reduce circuit area by 20% and 44%. For three of the architectures, parallel sample prediction modules were synthesized, showing a 30% reduction in gate area compared to single-sample processing units, along with reduced energy consumption for two of the implementations.
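A toy illustration of the coefficient-averaging idea: replacing a fixed subset of filter coefficients with their rounded average shrinks the set of distinct constants an MCM block must realize. The coefficient values and the pairwise grouping below are made up for illustration, not VVC's actual interpolation filters or the paper's exact subsets.

```python
coeffs = [-1, 4, -10, 58, 17, -5, 1, 0, -2, 6, -12, 56, 20, -6, 2, 0]

def average_subsets(cs, group):
    out = []
    for i in range(0, len(cs), group):
        sub = cs[i:i + group]
        avg = round(sum(sub) / len(sub))
        out += [avg] * len(sub)       # whole subset shares one constant
    return out

approx = average_subsets(coeffs, 2)
# An MCM block's size tracks the number of distinct constant magnitudes.
print(len(set(abs(c) for c in coeffs)))   # 12 distinct magnitudes before
print(len(set(abs(c) for c in approx)))   # 7 after -> smaller MCM block
```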
Hardware prefetching is critical to filling the performance gap between CPU speeds and slower memory accesses. With multicore architectures now commonplace, traditional prefetchers are severely challenged: independent core operation creates significant redundancy (up to 20% of prefetch requests are duplicates), causing unnecessary memory bus traffic and wasted bandwidth. Furthermore, cutting-edge prefetchers such as Pythia suffer about a 10% performance loss when scaling from a single-core to a four-core system. To solve these problems, we propose CRL-Pythia, a coordinated reinforcement-learning-based prefetcher specifically designed for multicore systems. CRL-Pythia addresses these issues by enabling cross-core information sharing and cooperative prefetching decisions, which greatly reduces redundant prefetch requests and improves learning convergence across cores. Our experiments demonstrate that CRL-Pythia outperforms standalone Pythia configurations in all cases, with approximately 12% IPC (instructions per cycle) improvement for bandwidth-constrained workloads, while imposing moderate hardware overhead. Our sensitivity analyses also verify its robustness and scalability, making CRL-Pythia a practical and efficient solution for contemporary multicore systems.
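A toy model of the redundancy being removed: independent cores issue overlapping prefetch streams, and a shared filter drops duplicates before they reach the memory bus. The filter structure and addresses are illustrative, not CRL-Pythia's actual coordination mechanism.

```python
from collections import deque

class SharedPrefetchFilter:
    """Tracks recently issued cache-line prefetches across all cores."""
    def __init__(self, capacity=64):
        self.recent = deque(maxlen=capacity)

    def issue(self, core, line_addr):
        if line_addr in self.recent:
            return False                 # duplicate: bandwidth saved
        self.recent.append(line_addr)
        return True

f = SharedPrefetchFilter()
core0 = [0x100, 0x140, 0x180, 0x1c0]
core1 = [0x140, 0x180, 0x1c0, 0x200]     # overlaps core0's stream
issued = [(c, a) for c, s in [(0, core0), (1, core1)] for a in s
          if f.issue(c, a)]
print(f"{len(issued)} of 8 prefetches reach memory")  # 5 of 8
```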
Over the past decade, AR/VR devices have drastically changed how we interact with the digital world. Users often share sensitive information, such as their location, browsing history, and even financial data, within third-party apps installed on these devices, assuming a secure environment protected from malicious actors. Recent research has revealed that malicious apps can exploit such capabilities and monitor benign apps to track user activities, leveraging fine-grained profiling tools such as performance counter APIs. However, app-to-app monitoring is not feasible on all AR/VR devices (e.g., Meta Quest), as concurrent standalone app execution is disabled. In this paper, we present OVRWatcher, a novel side-channel primitive for AR/VR devices that infers user activities by monitoring low-resolution (1 Hz) GPU usage via a background script, unlike prior work that relies on high-resolution profiling. OVRWatcher captures correlations between GPU metrics and 3D object interactions under varying speeds, distances, and rendering scenarios, without requiring concurrent app execution, access to application data, or additional SDK installations. We demonstrate the efficacy of OVRWatcher in fingerprinting both standalone AR/VR and WebXR applications. OVRWatcher also distinguishes virtual objects, such as products selected by real users in immersive shopping apps, and infers the number of participants in virtual meetings, thereby revealing users' product preferences and potentially exposing confidential information from those meetings. OVRWatcher achieves over 99% accuracy in app fingerprinting and over 98% accuracy in object-level inference.
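A minimal sketch of the fingerprinting step: classify a 1 Hz GPU-utilization trace by nearest neighbor against previously recorded per-app profiles. The traces and app names below are synthetic stand-ins, not OVRWatcher's measured data or classifier.

```python
import numpy as np

profiles = {                     # hypothetical recorded 1 Hz GPU traces
    "shopping_app": np.array([30, 55, 60, 58, 62, 35, 33, 31], float),
    "webxr_game":   np.array([80, 82, 85, 90, 88, 86, 84, 87], float),
    "meeting_app":  np.array([20, 22, 21, 45, 47, 46, 23, 22], float),
}

def fingerprint(trace):
    trace = (trace - trace.mean()) / trace.std()     # compare shapes,
    def dist(ref):                                   # not absolute load
        ref = (ref - ref.mean()) / ref.std()
        return np.linalg.norm(trace - ref)
    return min(profiles, key=lambda k: dist(profiles[k]))

observed = np.array([29, 54, 61, 59, 61, 36, 34, 30], float)
print(fingerprint(observed))    # -> "shopping_app"
```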
In the hardware design space exploration process, it is critical to optimize both hardware parameters and algorithm-to-hardware mappings. Previous work has largely approached this simultaneous optimization problem by exploring the hardware design space and the mapspace - both individually large and highly nonconvex - independently. The resulting combinatorial explosion has created significant difficulties for optimizers. In this paper, we introduce DOSA, which couples differentiable performance models with a gradient-descent-based optimization technique to explore both spaces simultaneously and identify high-performing design points. Experimental results demonstrate that DOSA outperforms random search and Bayesian optimization by 2.80x and 12.59x, respectively, in improving DNN model energy-delay product, given a similar number of samples. We also demonstrate the modularity and flexibility of DOSA by augmenting our analytical model with a learned model, allowing us to optimize the buffer sizes and mappings of a real DNN accelerator and attain a 1.82x improvement in energy-delay product.
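A toy version of the core idea: make the performance model differentiable and descend on a hardware parameter (buffer size) and a mapping parameter (tile size) at the same time, instead of searching each space separately. The model below is an illustrative stand-in, not DOSA's analytical model.

```python
import math

def delay(buf_kb, tile):
    return 100.0 / tile + 0.5 * tile * tile / buf_kb  # compute + spill cost

def energy(buf_kb, tile):
    return 2.0 * buf_kb + 0.1 * tile * tile           # SRAM + MAC cost

def log_edp(b, t):          # log keeps gradient magnitudes well-scaled
    return math.log(delay(b, t)) + math.log(energy(b, t))

def num_grad(f, b, t, eps=1e-5):
    return ((f(b + eps, t) - f(b - eps, t)) / (2 * eps),
            (f(b, t + eps) - f(b, t - eps)) / (2 * eps))

b, t = 4.0, 2.0
for _ in range(2000):       # plain gradient descent on both spaces at once
    gb, gt = num_grad(log_edp, b, t)
    b = max(1.0, b - 0.05 * gb)
    t = max(1.0, t - 0.05 * gt)
print(f"buffer={b:.2f} KB, tile={t:.2f}, EDP={delay(b, t) * energy(b, t):.1f}")
```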
Deep learning-based recommendation models (DLRMs) are widely deployed in commercial applications to enhance user experience. However, the large and sparse embedding layers in these models impose substantial memory bandwidth bottlenecks due to high memory access costs and irregular access patterns, leading to increased inference time and energy consumption. While resistive random access memory (ReRAM) based crossbars offer a fast and energy-efficient solution through in-memory embedding reduction operations, naively mapping embeddings onto crossbar arrays leads to poor crossbar utilization and thus degrades performance. We present ReCross, an efficient ReRAM-based in-memory computing (IMC) scheme designed to minimize execution time and enhance energy efficiency in DLRM embedding reduction. ReCross co-optimizes embedding access patterns and ReRAM crossbar characteristics by intelligently grouping and mapping co-occurring embeddings, replicating frequently accessed embeddings across crossbars, and dynamically selecting in-memory processing operations using a newly designed dynamic switch ADC circuit that considers runtime energy trade-offs. Experimental results demonstrate that ReCross achieves a 3.97x reduction in execution time and a 6.1x improvement in energy efficiency compared to state-of-the-art IMC approaches.
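A toy sketch of co-occurrence-aware placement: greedily co-locate embedding rows that are frequently reduced together into the same crossbar, so one in-memory reduction covers them. The query trace, crossbar size, and greedy heuristic are illustrative, not ReCross's actual mapping algorithm.

```python
from collections import Counter
from itertools import combinations

queries = [{0, 1, 2}, {0, 1, 3}, {2, 4}, {0, 1, 2}, {3, 4}, {0, 1}]
cooc = Counter(p for q in queries for p in combinations(sorted(q), 2))

ROWS = 3                                  # embedding rows per crossbar
crossbars = []
for (a, b), _ in cooc.most_common():      # strongest pairs first
    home = next((x for x in crossbars
                 if (a in x or b in x) and len(x | {a, b}) <= ROWS), None)
    if home is not None:
        home |= {a, b}                    # join an existing group
    elif not any(a in x or b in x for x in crossbars):
        crossbars.append({a, b})          # start a new crossbar

# Sweep up any row never placed through a pair.
for v in set().union(*queries) - set().union(*crossbars):
    for x in crossbars:
        if len(x) < ROWS:
            x.add(v)
            break
    else:
        crossbars.append({v})
print(crossbars)          # -> [{0, 1, 2}, {3, 4}]: hot rows co-located
```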
Verification is a critical process for ensuring the correctness of modern processors. The increasing complexity of processor designs and the emergence of new instruction set architectures (ISAs) such as RISC-V have created demand for more agile and efficient verification methodologies, particularly in verification efficiency and coverage convergence. While simulation-based approaches now attempt to incorporate advanced software testing techniques such as fuzzing to improve coverage, they face significant limitations when applied to processor verification, notably poor performance and inadequate test-case quality. Hardware-accelerated solutions using FPGA or ASIC platforms have tried to address these issues, yet they struggle with host-FPGA communication overhead, inefficient test-pattern generation, and suboptimal implementation of the entire multi-step verification process. In this paper, we present TurboFuzz, an end-to-end hardware-accelerated verification framework that implements the entire Test Generation-Simulation-Coverage Feedback loop on a single FPGA for modern processor verification. TurboFuzz enhances test quality through optimized test-case (seed) control flow, efficient inter-seed scheduling, and hybrid fuzzer integration, thereby improving coverage and execution efficiency. Additionally, it employs a feedback-driven generation mechanism to accelerate coverage convergence. Experimental results show that TurboFuzz collects up to 2.23x more coverage than software-based fuzzers within the same time budget, and achieves up to 571x speedup when detecting real-world issues, while maintaining full visibility and debugging capabilities with moderate area overhead.
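For reference, the Test Generation-Simulation-Coverage Feedback loop that TurboFuzz moves onto the FPGA looks like this as a host-side toy; `run_dut` is a hypothetical stand-in for simulating the processor and reporting covered points, and the coverage model is made up for illustration.

```python
import random

def run_dut(test):
    # Hypothetical coverage model: each byte value "covers" one of 32 bins.
    return {b % 32 for b in test}

def mutate(seed):
    t = bytearray(seed)
    t[random.randrange(len(t))] = random.randrange(256)
    return bytes(t)

corpus = [bytes(8)]                   # initial all-zeros seed
coverage = set()
for _ in range(500):
    test = mutate(random.choice(corpus))
    new = run_dut(test) - coverage
    if new:                           # feedback: keep coverage-adding seeds
        corpus.append(test)
        coverage |= new
print(f"{len(coverage)}/32 bins, corpus of {len(corpus)} seeds")
```

On a software fuzzer, every `run_dut` call is a slow simulation and every loop iteration crosses the host boundary; implementing the whole loop in FPGA fabric is what removes that overhead.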
Large language models (LLMs) face significant inference latency due to inefficiencies in GEMM operations, weight access, and KV cache access, especially in real-time scenarios. This highlights the need for a versatile, compute- and memory-efficient accelerator. Unfortunately, existing Transformer accelerators struggle to address both aspects simultaneously, as they focus on value-level processing and miss fine-grained opportunities to optimize computation and memory collaboratively. This paper introduces MCBP, a bit-grained, compute-memory-efficient algorithm-hardware co-design that leverages bit-slice (BS) enabled repetitiveness and sparsity to accelerate LLM inference. MCBP features three key innovations: 1) BS-repetitiveness-enabled computation reduction (BRCR), which eliminates redundant GEMM computations by exploiting the redundancy hidden among BS vectors; 2) BS-sparsity-enabled two-state coding (BSTC), which reduces weight access by exploiting the significant sparsity in high-order bit-slice weights; and 3) bit-grained progressive prediction (BGPP), which reduces KV cache access through early-termination-based bit-grained prediction. These techniques, supported by custom accelerator designs, effectively alleviate the burden of GEMM, weight access, and KV cache access. Extensive experiments on 26 benchmarks show that MCBP achieves 9.43x speedup and 31.1x higher energy efficiency than an Nvidia A100 GPU. Compared to SOTA Transformer accelerators, MCBP achieves 35x, 5.2x, and 3.2x energy savings over Spatten, FACT, and SOFA, respectively.
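To ground the bit-slice view: an unsigned 4-bit weight vector splits into four binary slices, and the full dot product equals the shift-weighted sum of per-slice dot products. Identical slices across rows (BS repetitiveness) or all-zero high-order slices (BS sparsity) are then work an accelerator can skip. The values below are illustrative, not MCBP's datapath.

```python
import numpy as np

w = np.array([5, 3, 5, 1], dtype=np.uint8)   # 4-bit weights
x = np.array([2, 7, 1, 4], dtype=np.int32)

slices = [(w >> k) & 1 for k in range(4)]    # bit-slices, LSB first
per_slice = [int(s.astype(np.int32) @ x) for s in slices]
result = sum(p << k for k, p in enumerate(per_slice))

assert result == int(w.astype(np.int32) @ x)  # exact reconstruction
print(result, per_slice)   # top slice is all-zero here -> skippable work
```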
Pairing-based cryptography (PBC) is crucial in modern cryptographic applications. With the rapid advancement of adversarial research and the growing diversity of application requirements, PBC accelerators need regular updates in algorithms, parameter configurations, and hardware design. However, traditional design methodologies face significant challenges, including prolonged design cycles, difficulties in balancing performance and flexibility, and insufficient support for architectural exploration. To address these challenges, we introduce Finesse, an agile design framework based on a co-design methodology. Finesse leverages a co-optimization cycle driven by a specialized compiler and a multi-granularity hardware simulator, enabling both optimized performance metrics and effective design space exploration. Furthermore, Finesse adopts a modular design flow to significantly shorten design cycles, while its versatile abstraction ensures flexibility across curve families and hardware architectures. Compared with previous frameworks, Finesse offers flexibility, efficiency, and rapid prototyping. With compilation times reduced to minutes, Finesse enables faster iteration cycles and streamlined hardware-software co-design. Experiments on popular curves demonstrate its effectiveness: Finesse achieves a $34\times$ improvement in throughput and a $6.2\times$ increase in area efficiency compared to previous flexible frameworks, while outperforming state-of-the-art non-flexible ASIC designs with a $3\times$ gain in throughput and a $3.2\times$ improvement in area efficiency.
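A toy of the analytical side of such a co-optimization loop: estimate pairing latency from field-operation counts and the per-operation cycle costs of candidate datapaths, so architectures can be compared in seconds rather than through full RTL iterations. All counts and costs below are illustrative placeholders, not measured values for any real curve or Finesse's simulator.

```python
op_counts = {            # rough structure of an optimal-ate pairing
    "fp_mul": 12_000,    # Miller loop + final exponentiation (illustrative)
    "fp_add": 30_000,
    "fp_inv": 10,
}

def cycles(arch):
    """Coarse-grained latency model: sum of op count x per-op cycles."""
    return sum(op_counts[op] * arch[op] for op in op_counts)

# Two hypothetical modular-multiplier datapaths.
mult_v1 = {"fp_mul": 40, "fp_add": 2, "fp_inv": 8_000}
mult_v2 = {"fp_mul": 24, "fp_add": 2, "fp_inv": 8_000}

for name, arch in [("v1", mult_v1), ("v2", mult_v2)]:
    print(name, cycles(arch), "cycles")
```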
Developing efficient hardware accelerators for mathematical kernels used in scientific applications and machine learning has traditionally been a labor-intensive task. These accelerators typically require low-level programming in Verilog or other hardware description languages, along with significant manual optimization effort. Recently, high-level hardware design tools like Chisel and High-Level Synthesis have emerged to alleviate this challenge. However, as with any compiler, some of the generated hardware may be suboptimal compared to expert-crafted designs. Understanding where these inefficiencies arise is crucial, as it provides valuable insights for both users and tool developers. In this paper, we propose a methodology to hierarchically decompose mathematical kernels - such as Fourier transforms, matrix multiplication, and QR factorization - into a set of common building blocks, or primitives. The primitives are then implemented in the different programming environments, and the larger algorithms are assembled from them. Furthermore, we employ an automatic approach to investigate the achievable frequency and required resources. Performing this experimentation at each level provides fairer comparisons between designs and offers guidance for both tool developers and hardware designers to adopt better practices.
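The decomposition idea in miniature, with hypothetical primitive and kernel choices: define one primitive (a multiply-accumulate chain) and assemble a larger kernel, matrix multiplication, from it. Implementing `dot` once per environment (Verilog, Chisel, HLS) then makes the assembled kernels directly comparable across tools.

```python
import numpy as np

def dot(a, b):                       # primitive: MAC chain
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y                 # one multiply-accumulate
    return acc

def matmul(A, B):                    # kernel assembled from the primitive
    return [[dot(row, col) for col in zip(*B)] for row in A]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
assert np.allclose(matmul(A, B), np.array(A) @ np.array(B))
```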
LLMs now form the backbone of AI agents for a diverse array of applications, including tool use, command-line agents, and web or computer use agents. These agentic LLM inference tasks are fundamentally different from chatbot-focused inference -- they often have much larger context lengths to capture complex, prolonged inputs, such as entire webpage DOMs or complicated tool call trajectories. This, in turn, generates significant off-chip memory traffic for the underlying hardware at the inference stage and causes the workload to be constrained by two memory walls, namely the bandwidth and capacity memory walls, preventing the on-chip compute units from achieving high utilization. In this paper, we introduce PLENA, a hardware-software co-designed system that applies three core optimization pathways to tackle these challenges. PLENA includes an efficient hardware implementation of compute and memory units supporting an asymmetric quantization scheme. PLENA also features a novel flattened systolic array architecture with native support for FlashAttention, which tackles both memory walls when serving long-context LLM inference. Additionally, PLENA is developed with a complete stack, including a custom ISA, a compiler, a cycle-emulated simulator, and an automated design space exploration flow. Simulation results show that PLENA achieves up to 8.5x higher utilization than existing accelerators, 2.24x higher throughput than the A100 GPU, and 3.85x higher throughput than the TPU v6e, under the same multiplier count and memory settings. The full PLENA system will also be open-sourced.
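For the quantization piece, a minimal sketch of asymmetric quantization: a scale plus a zero-point lets an integer grid cover a skewed value range. The bit width and tensors below are illustrative, not PLENA's actual scheme.

```python
import numpy as np

def quantize(x, bits=8):
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(-x.min() / scale)         # maps 0.0 onto the grid
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.int32) - zero_point) * scale

x = np.array([-0.1, 0.0, 0.7, 2.3, 5.9])         # skewed value range
q, s, z = quantize(x)
print(q, np.abs(dequantize(q, s, z) - x).max())  # small reconstruction error
```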
Wallace tree multipliers are a parallel digital multiplier architecture designed to minimize the worst-case time complexity of the circuit depth relative to the input size [1]. In particular, the architecture performs long multiplication in binary, reducing as many partial products per stage as possible through full- and half-adder circuits and achieving O(log n) depth, where n is the bit length of the input. This paper provides an overview of the design, progress, and methodology of the final project of ECE 55900, consisting of the schematic and layout of an 8-bit-input Wallace tree multiplier in the gpdk45 technology in Cadence Virtuoso, as well as the design attempts prior to the final product. It also covers our work on designing the final MAC (multiply-accumulate) unit, whose targets were left undefined and which we chose to implement as a 16-bit combinational multiply-add.
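The O(log n) claim follows from the reduction schedule: full adders act as 3:2 compressors, so each stage cuts the tallest partial-product column from height h to roughly ceil(2h/3) until two rows remain for a final carry-propagate add. A short sketch of that stage count:

```python
import math

def wallace_stages(n_bits):
    height, stages = n_bits, 0      # n partial products stacked per column
    while height > 2:               # reduce until a final 2-row CPA add
        height = math.ceil(2 * height / 3)
        stages += 1
    return stages

for n in (4, 8, 16, 32, 64):
    print(f"{n}-bit: {wallace_stages(n)} reduction stages")
# 8-bit: 8 -> 6 -> 4 -> 3 -> 2, i.e. 4 stages before the final adder.
```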
Authors of cryptographic software are well aware that their code should not leak secrets through its timing behavior and, until 2018, believed that following industry-standard constant-time coding guidelines was sufficient. However, the revelation of the Spectre family of speculative execution attacks introduced new complexities. To block speculative attacks, prior work has proposed annotating the program's source code to mark secret data, with hardware using this information to decide when to speculate (i.e., when only public values are involved) or not (when secrets are in play). While these solutions can track secret information stored on the heap, they suffer from limitations that prevent them from correctly tracking secrets on the stack, and they incur a performance cost. This paper introduces SecSep, a transformation framework that rewrites assembly programs so that they partition secret and public data on the stack. By moving from the source-code level to assembly rewriting, SecSep addresses the limitations of prior work. The key challenge in this assembly rewriting stems from the loss of semantic information over the lengthy compilation process. The key innovation of our methodology is a new variant of typed assembly language (TAL), Octal, that allows us to address this challenge. Assembly rewriting is driven by compile-time inference within Octal. We apply our technique to cryptographic programs and demonstrate that it enables secure speculation efficiently, incurring a low average overhead of $1.2\%$.
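A toy version of the layout step only: given secrecy labels for stack slots, assign offsets so secret and public data land in disjoint stack regions that hardware can treat differently. The labels, sizes, and fixed region split below are illustrative, not the output of Octal's actual type inference.

```python
slots = [                      # (name, size in bytes, inferred label)
    ("key_schedule", 176, "secret"),
    ("loop_counter",   8, "public"),
    ("nonce",         16, "public"),
    ("round_state",   64, "secret"),
]

def partition(slots, region_size=256):
    """Bump-allocate each slot within its label's own stack region."""
    offsets, cursors = {}, {"public": 0, "secret": region_size}
    for name, size, label in slots:
        offsets[name] = cursors[label]
        cursors[label] += size
    return offsets

for name, off in partition(slots).items():
    print(f"{name:13s} -> sp+{off}")
# Speculative loads from the public region may proceed; the secret
# region holds the addresses speculation must not forward from.
```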
Compute-in-Read-Only-Memory (CiROM) accelerators offer outstanding energy efficiency for CNNs by eliminating runtime weight updates. However, their scalability to Large Language Models (LLMs) is fundamentally constrained by the models' vast parameter sizes. Notably, LLaMA-7B - the smallest model in the LLaMA series - demands more than 1,000 cm2 of silicon area even in advanced CMOS nodes. This paper presents BitROM, the first CiROM-based accelerator that overcomes this limitation through co-design with BitNet's 1.58-bit quantization model, enabling practical and efficient LLM inference at the edge. BitROM introduces three key innovations: 1) a novel Bidirectional ROM Array that stores two ternary weights per transistor; 2) a Tri-Mode Local Accumulator optimized for ternary-weight computations; and 3) an integrated Decode-Refresh (DR) eDRAM that supports on-die KV-cache management, significantly reducing external memory access during decoding. In addition, BitROM integrates LoRA-based adapters to enable efficient transfer learning across various downstream tasks. Evaluated in 65nm CMOS, BitROM achieves 20.8 TOPS/W and a bit density of 4,967 kB/mm2 - a 10x improvement in area efficiency over prior digital CiROM designs. Moreover, the DR eDRAM contributes a 43.6% reduction in external DRAM access, further enhancing deployment efficiency for LLMs in edge applications.
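For context on the 1.58-bit format BitROM co-designs around: BitNet-style ternary quantization collapses weights to {-1, 0, +1} plus one per-tensor scale, so a ROM cell only has to distinguish three states. A minimal sketch with illustrative weights (the absmean scaling follows the published BitNet b1.58 recipe, not BitROM's circuits):

```python
import numpy as np

def ternary_quantize(W):
    scale = np.mean(np.abs(W)) + 1e-8          # absmean scaling
    Wq = np.clip(np.round(W / scale), -1, 1)   # -> {-1, 0, +1}
    return Wq.astype(np.int8), scale

W = np.random.randn(4, 4).astype(np.float32)
Wq, s = ternary_quantize(W)
x = np.random.randn(4).astype(np.float32)
print(np.abs((Wq @ x) * s - W @ x).max())      # ternary approximation error
# log2(3) ~ 1.58 bits per weight; a pair of ternary weights has 9 states,
# which the Bidirectional ROM Array packs into a single transistor.
```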