Natural language interfaces to tabular data must handle ambiguities inherent to queries. Instead of treating ambiguity as a deficiency, we reframe it as a feature of cooperative interaction, where the responsibility of query specification is shared between the user and the system. We develop a principled framework distinguishing cooperative queries, i.e., queries that yield a resolvable interpretation, from uncooperative queries that cannot be resolved. Applying the framework to evaluations for tabular question answering and analysis, we analyze the queries in 15 popular datasets and observe an uncontrolled mix of query types that is adequate neither for evaluating a system's execution accuracy nor its interpretation capabilities. Our framework and analysis of queries shift the perspective from fixing ambiguity to embracing cooperation in resolving queries. This reflection enables more informed design and evaluation of natural language interfaces for tabular data, for which we outline implications and directions for future research.
Existing tabular reasoning benchmarks mostly test models on small, uniform tables, underrepresenting the complexity of real-world data and giving an incomplete view of Large Language Models' (LLMs) reasoning abilities. Real tables are long, heterogeneous, and domain-specific, mixing structured fields with free text and requiring multi-hop reasoning across thousands of tokens. To address this gap, we introduce RUST-BENCH, a benchmark of 7966 questions from 2031 real-world tables spanning two domains: i) RB-Science (NSF grant records) and ii) RB-Sports (NBA statistics). Unlike prior work, RUST-BENCH evaluates LLMs jointly across scale, heterogeneity, domain specificity, and reasoning complexity. Experiments with open-source and proprietary models show that LLMs struggle with heterogeneous schemas and complex multi-hop inference, revealing persistent weaknesses in current architectures and prompting strategies. RUST-BENCH establishes a challenging new testbed for advancing tabular reasoning research.
Production vector search systems often fan out each query across parallel lanes (threads, replicas, or shards) to meet latency service-level objectives (SLOs). In practice, these lanes rediscover the same candidates, so extra compute does not increase coverage. We present a coordination-free lane partitioner that turns duplication into complementary work at the same cost and deadline. For each query we (1) build a deterministic candidate pool sized to the total top-k budget, (2) apply a per-query pseudorandom permutation, and (3) assign each lane a disjoint slice of positions. Lanes then return different results by construction, with no runtime coordination. At equal cost with four lanes (total candidate budget 64), on SIFT1M (1M SIFT feature vectors) with Hierarchical Navigable Small World graphs (HNSW) recall@10 rises from 0.249 to 0.999 while lane overlap falls from nearly 100% to 0%. On MS MARCO (8.8M passages) with HNSW, hit@10 improves from 0.200 to 0.601 and Mean Reciprocal Rank at 10 (MRR@10) from 0.133 to 0.330. For inverted file (IVF) indexes we see smaller but consistent gains (for example, +11% on MS MARCO) by de-duplicating list routing. A microbenchmark shows planner overhead of ~37 microseconds per query (mean at the main setting) with linear growth in the number of merged candidates. These results yield a simple operational guideline: size the per-query pool to the total budget, deterministically partition positions across lanes, and turn redundant fan-out into complementary coverage without changing budget or deadline.
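As a rough, illustrative sketch of the partitioning scheme described above (not the authors' implementation), the following Python snippet derives a per-query pseudorandom permutation from a query identifier and hands each lane a disjoint slice of the candidate pool; the pool contents, lane count, and budget of 64 are placeholder values.

```python
import hashlib
import random

def lane_slice(query_id: str, candidate_pool: list, num_lanes: int, lane_id: int) -> list:
    """Return the disjoint slice of the candidate pool assigned to one lane."""
    # Every lane derives the same permutation from the query alone,
    # so no runtime coordination is needed.
    seed = int.from_bytes(hashlib.sha256(query_id.encode()).digest()[:8], "big")
    order = list(range(len(candidate_pool)))
    random.Random(seed).shuffle(order)
    # Lane i takes permuted positions i, i + num_lanes, i + 2*num_lanes, ...
    return [candidate_pool[p] for p in order[lane_id::num_lanes]]

# Pool sized to the total top-k budget (64 here) split across 4 lanes.
pool = [f"cand_{i}" for i in range(64)]
slices = [lane_slice("query-123", pool, num_lanes=4, lane_id=i) for i in range(4)]
assert sum(len(s) for s in slices) == len(pool)
assert len(set().union(*map(set, slices))) == len(pool)  # zero overlap by construction
```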
Text-to-SQL systems provide a natural language interface that can enable even non-experts to access information stored in databases. However, existing Large Language Models (LLMs) struggle with SQL generation from natural instructions due to large schema sizes and complex reasoning. Prior work often focuses on complex, somewhat impractical pipelines using flagship models, while smaller, efficient models remain overlooked. In this work, we explore three multi-agent LLM pipelines, with systematic performance benchmarking across a range of small to large open-source models: (1) a Multi-Agent Discussion pipeline, where agents iteratively critique and refine SQL queries and a judge synthesizes the final answer; (2) a Planner-Coder pipeline, where a thinking-model planner generates stepwise SQL generation plans and a coder synthesizes queries; and (3) a Coder-Aggregator pipeline, where multiple coders independently generate SQL queries and a reasoning agent selects the best query. Experiments on the Bird-Bench Mini-Dev set reveal that multi-agent discussion can improve small-model performance, with up to a 10.6% increase in Execution Accuracy for Qwen2.5-7b-Instruct after three rounds of discussion. Among the pipelines, the Planner-Coder pipeline with reasoning-model planners yields the best results, with DeepSeek-R1-32B and QwQ-32B planners boosting Gemma 3 27B IT accuracy from 52.4% to the highest score of 56.4%. Code is available at https://github.com/treeDweller98/bappa-sql.
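The discussion pipeline can be pictured with a short sketch. The snippet below is a hypothetical outline of the critique-refine-judge loop, not the released code; `call_llm`, the agent names, and the prompts are placeholders to be wired to an actual LLM client.

```python
def call_llm(model: str, prompt: str) -> str:
    """Placeholder: route the prompt to the chosen open-source model."""
    raise NotImplementedError("plug in an LLM client here")

def multi_agent_discussion(question: str, schema: str, rounds: int = 3) -> str:
    agents = ["agent_a", "agent_b"]
    # Each agent drafts an initial SQL query from the schema and question.
    drafts = {a: call_llm(a, f"Schema:\n{schema}\nQuestion: {question}\nWrite a SQL query.")
              for a in agents}
    for _ in range(rounds):
        for a in agents:
            peers = "\n".join(d for b, d in drafts.items() if b != a)
            # Agents critique peer drafts and refine their own query.
            drafts[a] = call_llm(a, f"Question: {question}\nYour draft:\n{drafts[a]}\n"
                                    f"Peer drafts:\n{peers}\nCritique and return an improved SQL query.")
    # A judge synthesizes the final answer from the discussed drafts.
    return call_llm("judge", f"Question: {question}\nCandidates:\n" +
                    "\n".join(drafts.values()) + "\nReturn the single best SQL query.")
```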
Generalized Deduplication (GD) enables lossless compression with direct analytics on compressed data by dividing data into \emph{bases} and \emph{deviations} and performing dictionary encoding on the former. However, GD algorithms face scalability challenges for high-dimensional data. For example, the GreedyGD algorithm relies on an iterative bit-selection process across $d$-dimensional data, resulting in $O(nd^2)$ complexity over $n$ data rows to select the bits used as bases and deviations. Although the $n$ data rows can be subsampled during training at the expense of performance, high-dimensional data still suffers a marked loss in performance. This paper introduces EntroGD, an entropy-guided GD framework that reduces the complexity of the bit-selection algorithm to $O(nd)$. EntroGD follows a two-step process: first, it generates condensed samples to preserve analytic fidelity; second, it applies entropy-guided bit selection to maximize compression efficiency. Across 18 datasets of varying types and dimensionalities, EntroGD achieves compression performance comparable to GD-based and universal compressors, while reducing configuration time by up to 53.5$\times$ over GreedyGD and accelerating clustering by up to 31.6$\times$ relative to clustering on the original data with negligible accuracy loss, since analytics run on the condensed samples, which are far fewer than the original samples. Thus, EntroGD provides an efficient and scalable solution for performing analytics directly on compressed data.
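For intuition, a minimal sketch of entropy-guided bit selection is given below. It is not EntroGD's exact algorithm: it simply scores every bit position of a toy integer column by its empirical entropy in a single pass and routes low-entropy bits to the base and high-entropy bits to the deviation, with an illustrative threshold of 0.5 bits.

```python
import numpy as np

def bit_entropy(values: np.ndarray, num_bits: int = 32) -> np.ndarray:
    """Empirical entropy of each bit position over an array of unsigned integers."""
    bits = (values[:, None] >> np.arange(num_bits)) & 1          # shape (n, num_bits)
    p1 = bits.mean(axis=0)
    p0 = 1.0 - p1
    with np.errstate(divide="ignore", invalid="ignore"):
        return -(np.where(p1 > 0, p1 * np.log2(p1), 0.0) +
                 np.where(p0 > 0, p0 * np.log2(p0), 0.0))

rng = np.random.default_rng(0)
col = rng.integers(0, 1 << 12, size=10_000, dtype=np.uint32)     # toy data column
h = bit_entropy(col)
base_bits = np.where(h < 0.5)[0]        # low-entropy bits -> base (dictionary-encoded)
deviation_bits = np.where(h >= 0.5)[0]  # high-entropy bits -> deviation
```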
Domains such as IoT (Internet of Things) and HPC (High Performance Computing) generate a torrential influx of floating-point time-series data. Compressing these data while preserving their absolute fidelity is critical, and leveraging the massive parallelism of modern GPUs offers a path to unprecedented throughput. Nevertheless, designing such a high-performance GPU-based lossless compressor faces three key challenges: 1) heterogeneous data movement bottlenecks, 2) precision-preserving conversion complexity, and 3) anomaly-induced sparsity degradation. To address these challenges, this paper proposes Falcon, a GPU-based Floating-point Adaptive Lossless COmpressioN framework. Specifically, Falcon first introduces a lightweight asynchronous pipeline, which hides the I/O latency during the data transmission between the CPU and GPU. Then, we propose an accurate and fast float-to-integer transformation method with theoretical guarantees, which eliminates the errors caused by floating-point arithmetic. Moreover, we devise an adaptive sparse bit-plane lossless encoding strategy, which reduces the sparsity caused by outliers. Extensive experiments on 12 diverse datasets show that our compression ratio improves by 9.1% over the most advanced CPU-based method, with compression throughput 2.43X higher and decompression throughput 2.4X higher than the fastest GPU-based competitors, respectively.
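As a generic illustration of the precision-preserving idea (not Falcon's actual transform), the snippet below reinterprets IEEE-754 doubles as 64-bit integers so no floating-point arithmetic can introduce error, then XOR-deltas consecutive samples so smooth series produce sparse bit planes.

```python
import numpy as np

def float_to_xor_residuals(series: np.ndarray) -> np.ndarray:
    raw = series.astype(np.float64).view(np.uint64)  # bit-exact reinterpretation
    residuals = raw.copy()
    residuals[1:] ^= raw[:-1]                        # XOR with the previous sample
    return residuals

def xor_residuals_to_float(residuals: np.ndarray) -> np.ndarray:
    raw = residuals.copy()
    for i in range(1, len(raw)):                     # undo the XOR chain
        raw[i] ^= raw[i - 1]
    return raw.view(np.float64)

ts = np.cumsum(np.random.default_rng(1).normal(size=1000))
assert np.array_equal(ts, xor_residuals_to_float(float_to_xor_residuals(ts)))  # lossless
```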
Filtered Approximate Nearest Neighbor (ANN) search retrieves the closest vectors for a query vector from a dataset. It enforces that a specified set of discrete labels $S$ for the query must be included in the labels of each retrieved vector. Existing graph-based methods typically incorporate filter awareness by assigning fixed penalties or prioritizing nodes based on filter satisfaction. However, since these methods use fixed, data-independent penalties, they often fail to generalize across datasets with diverse label and vector distributions. In this work, we propose a principled alternative that learns the optimal trade-off between vector distance and filter match directly from the data, rather than relying on fixed penalties. We formulate this as a constrained linear optimization problem, deriving weights that better reflect the underlying filter distribution and more effectively address the filtered ANN search problem. These learned weights guide both the search process and index construction, leading to graph structures that more effectively capture the underlying filter distribution and filter semantics. Our experiments demonstrate that adapting the distance function to the data significantly improves accuracy by 5-10% over fixed-penalty methods, providing a more flexible and generalizable framework for the filtered ANN search problem.
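One way to picture learning the trade-off from data is a small linear program over sampled ranking constraints; the formulation below is an assumed, simplified stand-in for the paper's constrained optimization, with `margin`, the scoring form `dist + lam * missing_labels`, and the toy pairs all chosen for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def learn_filter_weight(pairs, margin=1.0):
    """pairs: ((dist_a, miss_a), (dist_b, miss_b)) tuples where a should rank above b."""
    m = len(pairs)
    # Variables: [lam, s_1, ..., s_m]; minimize slack, with a tiny weight on lam
    # so the smallest feasible penalty is preferred.
    c = np.concatenate(([1e-3], np.ones(m)))
    A_ub = np.zeros((m, 1 + m))
    b_ub = np.zeros(m)
    for i, ((da, ma), (db, mb)) in enumerate(pairs):
        # Want: da + lam*ma + margin <= db + lam*mb + s_i
        # i.e.: lam*(ma - mb) - s_i <= db - da - margin
        A_ub[i, 0] = ma - mb
        A_ub[i, 1 + i] = -1.0
        b_ub[i] = db - da - margin
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + m), method="highs")
    return res.x[0]

# Toy pairs: filter-satisfying candidates should outrank closer filter violators.
pairs = [((0.9, 0), (0.4, 2)), ((1.1, 0), (0.7, 1)), ((0.8, 1), (0.6, 3))]
lam = learn_filter_weight(pairs)   # score a candidate as dist + lam * missing_labels
```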
Small, imbalanced datasets and poor input image quality can lead to high false prediction rates in deep learning models. This paper introduces Class-Based Image Composition, an approach that reformulates training inputs by fusing multiple images of the same class into combined visual composites, named Composite Input Images (CoImg). This enhances intra-class variance, increases the information density per training sample, and improves the model's ability to distinguish subtle disease patterns. Our method was evaluated on the Optical Coherence Tomography Dataset for Image-Based Deep Learning Methods (OCTDL) (Kulyabin et al., 2024), which contains 2,064 high-resolution optical coherence tomography (OCT) scans of the human retina, representing seven distinct diseases with significant class imbalance. We constructed a perfectly class-balanced version of this dataset, named Co-OCTDL, where each scan is represented as a 3x1 composite image. To assess the effectiveness of this new representation, we conducted a comparative analysis between the original dataset and its variant using a VGG16 model; a fair comparison was ensured by using the identical model architecture and hyperparameters in all experiments. The proposed approach markedly improved diagnostic results. The enhanced dataset achieved near-perfect accuracy (99.6%), F1-score (0.995), and AUC (0.9996), compared to a baseline model trained on the raw dataset. The false prediction rate was also significantly lower, demonstrating that the method can produce high-quality predictions even for weak datasets affected by class imbalance or small sample size.
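A minimal sketch of building one Composite Input Image is shown below; it assumes three equally sized, preprocessed same-class scans (random arrays here) and concatenates them into a 3x1 layout.

```python
import numpy as np

def make_composite(images: list) -> np.ndarray:
    """Concatenate three same-class images side by side into a 3x1 layout."""
    assert len(images) == 3 and len({img.shape for img in images}) == 1
    return np.concatenate(images, axis=1)   # (H, 3*W, C)

rng = np.random.default_rng(42)
same_class_scans = [rng.random((224, 224, 3)) for _ in range(3)]  # stand-in OCT scans
co_img = make_composite(same_class_scans)
print(co_img.shape)   # (224, 672, 3): one composite training sample from three scans
```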
Unstructured data, in the form of text, images, video, and audio, is produced at exponentially increasing rates. In tandem, machine learning (ML) methods have become increasingly powerful at analyzing unstructured data. Modern ML methods can now detect objects in images, understand actions in videos, and even classify complex legal texts based on legal intent. Combined, these trends make it increasingly feasible for analysts and researchers to automatically understand the "real world." However, there are major challenges in deploying these techniques: 1) executing queries efficiently given the expense of ML methods, 2) expressing queries over bespoke forms of data, and 3) handling errors in ML methods. In this monograph, we discuss challenges and advances in data management systems for unstructured data using ML, with a particular focus on video analytics. Using ML to answer queries introduces new challenges. First, even turning user intent into queries can be challenging: it is not obvious how to express a query of the form "select instances of cars turning left." Second, ML models can be orders of magnitude more expensive than processing traditional structured data. Third, ML models and the methods to accelerate analytics with ML models can be error-prone. Recent work in the data management community has aimed to address all of these challenges. Users can now express queries via user-defined functions, opaquely through standard structured schemas, and even by providing examples. Given a query, recent work focuses on optimizing queries by approximating expensive "gold" methods with varying levels of guarantees. Finally, to handle errors in ML models, recent work has focused on applying outlier and drift detection to data analytics with ML.
Data provenance has numerous applications in the context of data preparation pipelines. It can be used for debugging faulty pipelines, interpreting results, verifying fairness, and identifying data quality issues, which may affect the sources feeding the pipeline execution. In this paper, we present an indexing mechanism to efficiently capture and query pipeline provenance. Our solution leverages tensors to capture fine-grained provenance of data processing operations, using minimal memory. In addition to record-level lineage relationships, we provide finer granularity at the attribute level. This is achieved by augmenting tensors, which capture retrospective provenance, with prospective provenance information, drawing connections between input and output schemas of data processing operations. We demonstrate how these two types of provenance (retrospective and prospective) can be combined to answer a broad range of provenance queries efficiently, and show effectiveness through evaluation exercises using both real and synthetic data.
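The tensor view of record-level lineage can be illustrated with boolean matrices; the sketch below is an assumed simplification of the paper's index (one dense matrix per operation, composed by matrix multiplication), not its actual layout.

```python
import numpy as np

inp = np.array([5, 12, 7, 20, 3])                    # toy input column

# Operation 1: filter rows with value > 6; P[i, j] = True iff output row i came from input row j.
keep = np.where(inp > 6)[0]
P_filter = np.zeros((len(keep), len(inp)), dtype=bool)
P_filter[np.arange(len(keep)), keep] = True

# Operation 2: aggregate all surviving rows into one output row.
P_agg = np.ones((1, len(keep)), dtype=bool)

# End-to-end record-level lineage: compose the per-operation matrices.
P_pipeline = (P_agg.astype(np.int32) @ P_filter.astype(np.int32)) > 0
print(np.where(P_pipeline[0])[0])                    # input rows 1, 2, 3 fed the final result
```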
Database (DB) search and clustering are fundamental in proteomics, but conventional full clustering and search approaches demand high resources and incur long latency. We propose a lightweight incremental clustering and highly parallelizable DB search platform tailored for resource-constrained environments, delivering low energy and latency without compromising performance. By leveraging mass-spectrometry insights, we employ bucket-wise parallelization and query scheduling to reduce latency. A one-time hardware initialization with pre-clustered proteomics data enables continuous DB search and local re-clustering, offering a more practical and efficient alternative to clustering from scratch. Heuristics from pre-clustered data guide incremental clustering, accelerating the process by 20x with only a 0.3% increase in clustering error. DB search results overlap by 96% with state-of-the-art tools, validating search quality. The hardware leverages a 3T-2MTJ SOT-CAM at the 7nm node with a compute-in-memory design. For the human genome draft dataset (131 GB), setup requires 1.19 mJ for 2M spectra, while a 1000-query search consumes 1.1 µJ. Bucket-wise parallelization further achieves 100x speedup.
Traditional ETL and ELT design patterns struggle to meet modern requirements of scalability, governance, and real-time data processing. Hybrid approaches such as ETLT (Extract-Transform-Load-Transform) and ELTL (Extract-Load-Transform-Load) are already used in practice, but the literature lacks best practices and formal recognition of these approaches as design patterns. This paper formalizes ETLT and ELTL as reusable design patterns by codifying implicit best practices and introduces enhanced variants, ETLT++ and ELTL++, to address persistent gaps in governance, quality assurance, and observability. We define ETLT and ELTL patterns systematically within a design pattern framework, outlining their structure, trade-offs, and use cases. Building on this foundation, we extend them into ETLT++ and ELTL++ by embedding explicit contracts, versioning, semantic curation, and continuous monitoring as mandatory design obligations. The proposed framework offers practitioners a structured roadmap to build auditable, scalable, and cost-efficient pipelines, unifying quality enforcement, lineage, and usability across multi-cloud and real-time contexts. By formalizing ETLT and ELTL, and enhancing them through ETLT++ and ELTL++, this work bridges the gap between ad hoc practice and systematic design, providing a reusable foundation for modern, trustworthy data engineering.
In recent years, research on multi-agent systems has moved toward larger and more complex models to fulfill sophisticated tasks. We point out two possible pitfalls of this increasing complexity: susceptibility to faults and performance bottlenecks. To address the former, we propose a transaction-based framework for designing very complex multi-agent systems (VCMAS). To address the latter, we integrate transaction scheduling into the proposed framework. We implemented both ideas in the OptiMA framework and show that it can facilitate the execution of VCMAS with more than a hundred agents. We also demonstrate the effect of transaction scheduling on such a system, showing improvements of more than 16%. Furthermore, we provide a theoretical analysis of the transaction scheduling problem and practical tools that can be used in future research on it.
Unstructured data is pervasive, but analytical queries demand structured representations, creating a significant extraction challenge. Existing methods like RAG lack schema awareness and struggle with cross-document alignment, leading to high error rates. We propose ReDD (Relational Deep Dive), a framework that dynamically discovers query-specific schemas, populates relational tables, and ensures error-aware extraction with provable guarantees. ReDD features a two-stage pipeline: (1) Iterative Schema Discovery (ISD) identifies minimal, joinable schemas tailored to each query, and (2) Tabular Data Population (TDP) extracts and corrects data using lightweight classifiers trained on LLM hidden states. A main contribution of ReDD is SCAPE, a statistically calibrated method for error detection with coverage guarantees, and SCAPE-HYB, a hybrid approach that optimizes the trade-off between accuracy and human correction costs. Experiments across diverse datasets demonstrate ReDD's effectiveness, reducing data extraction errors from up to 30% to below 1% while maintaining high schema completeness (100% recall) and precision. ReDD's modular design enables fine-grained control over accuracy-cost trade-offs, making it a robust solution for high-stakes analytical queries over unstructured corpora.
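The coverage-guarantee idea behind SCAPE can be pictured with a conformal-style calibration sketch; the snippet below is an assumed illustration (the paper's exact procedure may differ), choosing a confidence threshold on a held-out calibration set so that at least a 1 - alpha fraction of true extraction errors is flagged for human correction.

```python
import numpy as np

def calibrate_threshold(conf_scores, is_error, alpha=0.05):
    """Pick a confidence threshold that flags >= 1 - alpha of true errors."""
    err_scores = np.sort(np.asarray(conf_scores)[np.asarray(is_error)])
    n = len(err_scores)
    k = min(n - 1, int(np.ceil((1 - alpha) * (n + 1))) - 1)   # conformal-style rank
    return err_scores[k]

rng = np.random.default_rng(0)
conf = np.concatenate([rng.beta(8, 2, 900), rng.beta(2, 5, 100)])   # correct vs. erroneous records
errs = np.concatenate([np.zeros(900, bool), np.ones(100, bool)])
tau = calibrate_threshold(conf, errs, alpha=0.05)
flag_for_review = conf <= tau   # records routed to human correction
```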
Data lakes enable easy maintenance of heterogeneous data in its native form. While this flexibility can accelerate data ingestion, it shifts the complexity of data preparation and query processing to data discovery tasks. One such task is Table Union Search (TUS), which identifies tables that can be unioned with a given input table. In this work, we present EasyTUS, a comprehensive framework that leverages Large Language Models (LLMs) to perform efficient and scalable Table Union Search across data lakes. EasyTUS implements the search pipeline as three modular steps: Table Serialization for consistent formatting and sampling, Table Representation that utilizes LLMs to generate embeddings, and Vector Search that leverages approximate nearest neighbor indexing for semantic matching. To enable reproducible and systematic evaluation, we also introduce TUSBench, a novel standardized benchmarking environment within the EasyTUS framework. TUSBench supports unified comparisons across approaches and data lakes, promoting transparency and progress in the field. Our experiments using TUSBench show that EasyTUS consistently outperforms most state-of-the-art approaches, achieving average improvements of up to 34.3% in Mean Average Precision (MAP), up to 79.2x speedup in data preparation, and up to 7.7x faster query processing. Furthermore, EasyTUS maintains strong performance even in metadata-absent settings, highlighting its robustness and adaptability across data lakes.
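The three-step pipeline can be sketched in a few lines; below, `serialize_table`, a hashed-trigram `embed`, and brute-force cosine search are toy stand-ins for the paper's serialization format, LLM embedding model, and approximate nearest neighbor index.

```python
import numpy as np

def serialize_table(name, columns, sample_rows, max_rows=3):
    header = f"Table {name}: " + ", ".join(columns)
    rows = ["; ".join(map(str, r)) for r in sample_rows[:max_rows]]
    return header + " | " + " | ".join(rows)

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for an LLM embedding: hashed character-trigram counts."""
    v = np.zeros(dim)
    for i in range(len(text) - 2):
        v[hash(text[i:i + 3]) % dim] += 1.0
    return v

def top_k_unionable(query_text, corpus_texts, k=2):
    q = embed(query_text)
    corpus = np.stack([embed(t) for t in corpus_texts])
    sims = corpus @ q / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k]          # indices of candidate union partners

tables = {"t1": (["city", "population"], [["Oslo", 700000], ["Bergen", 290000]]),
          "t2": (["town", "inhabitants"], [["Ghent", 260000]]),
          "t3": (["player", "points"], [["Jordan", 32292]])}
texts = [serialize_table(n, c, r) for n, (c, r) in tables.items()]
print(top_k_unionable(texts[0], texts))   # nearest tables to t1 in the toy corpus
```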
The Graph Edit Distance (GED) is an important metric for measuring the similarity between two (labeled) graphs. It is defined as the minimum cost required to convert one graph into another through a series of (elementary) edit operations. Its effectiveness in assessing the similarity of large graphs is limited by the complexity of its exact calculation, which is NP-hard in theory and computationally challenging in practice. The latter can be mitigated by switching to Graph Similarity Search under GED constraints, which determines whether the edit distance between two graphs is below a given threshold. A popular framework for solving Graph Similarity Search under GED constraints in a graph database for a query graph is the filter-and-verification framework: filtering discards unpromising graphs, while the verification step certifies the similarity between the filtered graphs and the query graph. To improve the filtering step, we define a lower bound based on an integer linear programming formulation. We prove that this lower bound dominates the effective branch match-based lower bound and can also be computed efficiently. Consequently, we propose a graph similarity search algorithm that uses a hierarchy of lower bound algorithms and solves a novel integer programming formulation that exploits the threshold parameter. Extensive computational experiments on a well-established test bed show that our approach significantly outperforms the state-of-the-art algorithm on most of the examined thresholds.
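For context, the snippet below sketches the filter-and-verification loop with a simple label-multiset lower bound under unit edit costs; this bound is far weaker than the ILP-based bound proposed in the paper, and `exact_ged` is a placeholder for any exact verifier.

```python
from collections import Counter

def label_lower_bound(g1, g2):
    """Simple GED lower bound from vertex- and edge-label multisets (unit costs)."""
    v1, e1 = Counter(g1[0]), Counter(g1[1])
    v2, e2 = Counter(g2[0]), Counter(g2[1])
    v_common = sum((v1 & v2).values())
    e_common = sum((e1 & e2).values())
    return (max(sum(v1.values()), sum(v2.values())) - v_common +
            max(sum(e1.values()), sum(e2.values())) - e_common)

def similarity_search(query, database, tau, exact_ged):
    """Return graphs whose exact GED to `query` is at most `tau`."""
    candidates = [g for g in database if label_lower_bound(query, g) <= tau]  # filter
    return [g for g in candidates if exact_ged(query, g) <= tau]              # verify

# Toy graphs given as (vertex_labels, edge_labels).
q = (["C", "C", "O"], ["single", "double"])
db = [(["C", "C", "O"], ["single", "single"]), (["N", "N", "N", "N"], ["single"] * 3)]
print([label_lower_bound(q, g) for g in db])   # [1, 6]: the second graph is pruned for tau <= 5
```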
In this paper we propose an approach to implementing a specific relationship set between two entity sets, called a combinatorial relationship set. For a combinatorial relationship set B between entity sets G and I, the mapping cardinality is many-to-many. Additionally, entities from G can be uniquely encoded with a pair of values (h, k) generated by the procedure for numbering combinations of entities from I. The encoding procedure is based on the combinatorial number system, which represents every possible k-combination of a set of n elements by a single number. In general, many-to-many relationship sets are represented by a relation or table, whereas the combinatorial relationship is not physically stored as a separate table; instead, all information is encapsulated in a single column added to G. The new column is a candidate key in G. An additional operation named Rank-Join, extending the fundamental relational algebra, is presented to combine information from G and I associated with a combinatorial relationship set. The motivation for combinatorial relationships originates from challenges in designing and implementing multivalued dimensions and bridge tables in data-warehouse models.
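The encoding rests on the standard rank/unrank maps of the combinatorial number system, sketched below; storing the pair (k, rank) in the added column is an illustrative reading of the (h, k) encoding described above.

```python
from math import comb

def rank_combination(combo):
    """Rank of a k-combination c_1 < ... < c_k of {0, ..., n-1} in the combinatorial number system."""
    return sum(comb(c, i + 1) for i, c in enumerate(sorted(combo)))

def unrank_combination(rank, k):
    """Recover the k-combination from its rank (inverse of rank_combination)."""
    combo = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= rank:
            c += 1
        combo.append(c)
        rank -= comb(c, i)
    return sorted(combo)

combo = [2, 5, 7]                    # entities of I related to one entity of G
r = rank_combination(combo)          # single integer stored in the added key column
assert unrank_combination(r, len(combo)) == combo
```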
There is growing interest in deploying ML inference and knowledge retrieval as services that support both interactive queries by end users and the more demanding request flows that arise from AIs integrated into end-user applications and deployed as agents. Our central premise is that these latter cases will bring service-level latency objectives (SLOs). Existing ML serving platforms use batching to optimize for high throughput, exposing them to unpredictable tail latencies. Vortex enables an SLO-first approach. For identical tasks, Vortex's pipelines achieve significantly lower and more stable latencies than TorchServe and Ray Serve over a wide range of workloads, often meeting a given SLO target at more than twice the request rate. When RDMA is available, the Vortex advantage is even more significant.