In this technical report, we present a formalisation of the MongoDB aggregation framework. Our aim is to identify a fragment that could serve as the starting point for an industry-wide standard for querying JSON document databases. We provide a syntax and formal semantics for a set of selected operators, and we show how this fragment relates to known relational query languages. We explain how our semantics differs from the current implementation of MongoDB, and justify our choices. We provide a set of algebraic transformations that can be used for query optimisation.
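As a minimal illustration of the kind of pipelines such a fragment would cover, the sketch below builds a standard MongoDB aggregation pipeline with $match, $unwind, $group, and $project stages; which operators the report's formalised fragment actually includes is an assumption here, not a claim from the report.

```python
# Illustrative MongoDB aggregation pipeline using common operators. Whether
# each operator belongs to the formalised fragment is an assumption.
import json

pipeline = [
    {"$match": {"status": "active"}},                  # filter documents
    {"$unwind": "$items"},                             # flatten the items array
    {"$group": {                                       # aggregate per customer
        "_id": "$customer_id",
        "total": {"$sum": "$items.price"},
    }},
    {"$project": {"customer": "$_id", "total": 1, "_id": 0}},
]

# With pymongo this would run as: db.orders.aggregate(pipeline)
print(json.dumps(pipeline, indent=2))
```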
Approximate Nearest Neighbor Search (ANNS) plays a critical role in applications such as search engines, recommender systems, and RAG for LLMs. Vector quantization (VQ), a crucial technique for ANNS, is commonly used to reduce space overhead and accelerate distance computations. However, despite significant research advances, state-of-the-art VQ methods still face challenges in balancing encoding efficiency and quantization accuracy. To address these limitations, we propose a novel VQ method called SAQ. To improve accuracy, SAQ employs a new dimension segmentation technique to strategically partition PCA-projected vectors into segments along their dimensions. By prioritizing leading dimension segments with larger magnitudes, SAQ allocates more bits to high-impact segments, optimizing the use of the available space quota. An efficient dynamic programming algorithm is developed to optimize dimension segmentation and bit allocation, ensuring minimal quantization error. To speed up vector encoding, SAQ devises a code adjustment technique to first quantize each dimension independently and then progressively refine quantized vectors using a coordinate-descent-like approach to avoid exhaustive enumeration. Extensive experiments demonstrate SAQ's superiority over classical methods (e.g., PQ, PCA) and recent state-of-the-art approaches (e.g., LVQ, Extended RabitQ). SAQ achieves up to 80% reduction in quantization error and accelerates encoding speed by over 80x compared to Extended RabitQ.
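To make the bit-allocation idea concrete, here is a toy dynamic program that splits a total bit budget across dimension segments so that segments with larger energy receive more bits. The per-segment error model err = energy * 4^(-bits) and the segment energies are illustrative assumptions, not SAQ's actual cost model or segmentation procedure.

```python
# Toy DP in the spirit of SAQ's bit allocation: give high-energy (leading PCA)
# segments more of the bit budget. The error model is an assumption.
import numpy as np

def allocate_bits(segment_energy, total_bits, max_bits_per_segment=8):
    k = len(segment_energy)
    INF = float("inf")
    # best[i][b] = minimal modeled error using the first i segments and b bits
    best = [[INF] * (total_bits + 1) for _ in range(k + 1)]
    choice = [[0] * (total_bits + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for i in range(1, k + 1):
        for b in range(total_bits + 1):
            for used in range(min(b, max_bits_per_segment) + 1):
                err = best[i - 1][b - used] + segment_energy[i - 1] * 4.0 ** (-used)
                if err < best[i][b]:
                    best[i][b], choice[i][b] = err, used
    # Backtrack the per-segment allocation.
    alloc, b = [], total_bits
    for i in range(k, 0, -1):
        alloc.append(choice[i][b])
        b -= choice[i][b]
    return alloc[::-1], best[k][total_bits]

energies = [10.0, 4.0, 1.0, 0.3]        # leading segments carry more energy
print(allocate_bits(energies, total_bits=12))
```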
When query evaluation produces too many tuples, a new approach in query answering is to retrieve a diverse subset of them. The standard approach for measuring the diversity of a set of tuples is to use a distance function between tuples, which measures the dissimilarity between them, to then aggregate the pairwise distances of the set into a score (e.g., by using sum or min aggregation). However, as we will point out in this work, the resulting diversity measures may display some unintuitive behavior. Moreover, even in very simple settings, finding a maximally diverse subset of the answers of fixed size is, in general, intractable and little is known about approximations apart from some hand-picked distance-aggregator pairs. In this work, we introduce a novel approach for computing the diversity of tuples based on volume instead of distance. We present a framework for defining volume-based diversity functions and provide several examples of these measures applied to relational data. Although query answering of conjunctive queries (CQ) under this setting is intractable in general, we show that one can always compute a (1-1/e)-approximation for any volume-based diversity function. Furthermore, in terms of combined complexity, we connect the evaluation of CQs under volume-based diversity functions with the ranked enumeration of solutions, finding general conditions under which a (1-1/e)-approximation can be computed in polynomial time.
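The (1-1/e) guarantee mentioned above is the one achieved by greedy selection for monotone submodular objectives; the sketch below shows that greedy loop with a volume-style score (log-determinant of a regularized Gram matrix over tuple feature vectors). The scoring function is an illustrative stand-in, not the paper's specific family of volume-based diversity functions.

```python
# Greedy subset selection of the kind that yields (1 - 1/e)-approximations for
# monotone submodular objectives; the log-det "volume" score is an assumption.
import numpy as np

def volume_score(vectors):
    if not vectors:
        return 0.0
    V = np.stack(vectors)                      # |S| x d matrix of tuple features
    gram = V @ V.T + 1e-6 * np.eye(len(vectors))
    return float(np.linalg.slogdet(gram)[1])   # log "volume" of the selected set

def greedy_diverse(answers, k):
    selected, chosen_idx = [], []
    for _ in range(k):
        base = volume_score(selected)
        gains = [
            (volume_score(selected + [a]) - base, i)
            for i, a in enumerate(answers) if i not in chosen_idx
        ]
        _, i = max(gains)                      # pick the largest marginal gain
        selected.append(answers[i])
        chosen_idx.append(i)
    return chosen_idx

rng = np.random.default_rng(0)
answers = [rng.normal(size=3) for _ in range(20)]   # numeric encodings of tuples
print(greedy_diverse(answers, k=4))
```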
While extensive research on query evaluation has achieved consistent improvements in the time complexity of algorithms, the space complexity of query evaluation has been largely ignored. This is a particular challenge in settings with strict pre-defined space constraints. In this paper, we examine the combined space-time complexity of conjunctive queries (CQs) and, more generally, of sum-product queries (SPQs). We propose several classes of space-efficient algorithms for evaluating SPQs, and we show that the optimal time complexity is almost always achievable with asymptotically lower space complexity than traditional approaches.
We present ORQ, a system that enables collaborative analysis of large private datasets using cryptographically secure multi-party computation (MPC). ORQ protects data against semi-honest or malicious parties and can efficiently evaluate relational queries with multi-way joins and aggregations that have been considered notoriously expensive under MPC. To do so, ORQ eliminates the quadratic cost of secure joins by leveraging the fact that, in practice, the structure of many real queries allows us to join records and apply the aggregations "on the fly" while keeping the result size bounded. On the system side, ORQ contributes generic oblivious operators, a data-parallel vectorized query engine, a communication layer that amortizes MPC network costs, and a dataflow API for expressing relational analytics -- all built from the ground up. We evaluate ORQ in LAN and WAN deployments on a diverse set of workloads, including complex queries with multiple joins and custom aggregations. When compared to state-of-the-art solutions, ORQ significantly reduces MPC execution times and can process one order of magnitude larger datasets. For our most challenging workload, the full TPC-H benchmark, we report results entirely under MPC with Scale Factor 10 -- a scale that had previously been achieved only with information leakage or the use of trusted third parties.
High read and write performance is important for generic key/value stores, which are fundamental to modern applications and databases. Yet, achieving high performance for both reads and writes is challenging due to traditionally limited memory and the pick-any-two-out-of-three tradeoff between memory use, read performance, and write performance. Existing state-of-the-art approaches limit memory usage and choose a primary dimension (reads or writes) for which to optimize their on-disk structures. They recover performance in the remaining dimension by other mechanisms. This approach limits databases' maximum performance in the remaining dimension and their dynamic (online) tunability to respond to changing workloads. We explore a different approach that dynamically trades memory for read or write performance as needed. We present TurtleKV, which includes a novel unbiased data structure for on-disk storage. It includes a knob that dynamically grows the memory reserved for boosting read or write performance. When evaluated on YCSB, TurtleKV achieves up to 8x the write throughput of industry-leader RocksDB and up to 5x the read throughput while incurring similar space amplification. Compared to the state-of-the-art system SplinterDB, TurtleKV runs up to 40% better on point queries, up to 6x better on range scans, and achieves similar write performance, while incurring 50% less space amplification.
We consider conjunctive queries with arithmetic comparisons (CQAC) and investigate the computational complexity of the problem: Given two CQAC queries, $Q$ and $Q'$, is $Q'$ contained in $Q$? We know that, for CQAC queries, the problem of testing containment is $\Pi_2^p$-complete. However, there are broad classes of queries with semi-interval arithmetic comparisons in the containing query that render the problem solvable in NP. In all cases examined, the contained query is allowed to be any CQAC. Interestingly, we also prove that there are simple cases where the problem remains $\Pi_2^p$-complete. We also investigate the complexity of computing certain answers in the framework of answering CQAC queries with semi-interval comparisons using any CQAC views. We prove that maximally contained rewritings in the language of unions of CQACs always compute exactly all certain answers. We find cases where we can compute certain answers in polynomial time using maximally contained rewritings.
Given a conjunctive query and a database instance, we aim to develop an index that can efficiently answer spatial queries on the results of a conjunctive query. We are interested in some commonly used spatial queries, such as range emptiness, range count, and nearest neighbor queries. These queries have essential applications in data analytics, such as filtering relational data based on attribute ranges and temporal graph analysis for counting graph structures like stars, paths, and cliques. Furthermore, this line of research can accelerate relational algorithms that incorporate spatial queries in their workflow, such as relational clustering. Known approaches either have to spend $\tilde{O}(N)$ query time or use space as large as the number of query results, both of which are inefficient or unrealistic in practice. Hence, we aim to construct an index that answers spatial conjunctive queries in both a time- and space-efficient way. In this paper, we establish lower bounds on the tradeoff between answering time and space usage. For $k$-star (resp. $k$-path) queries, we show that any index for range emptiness, range counting or nearest neighbor queries with $T$ answering time requires $\Omega\left(N+\frac{N^k}{T^k}\right)$ (resp. $\Omega\left(N+\frac{N^2}{T^{2/(k-1)}}\right)$) space. Then, we construct optimal indexes for answering range emptiness and range counting problems over $k$-star and $k$-path queries. Extending this result, we build an index for hierarchical queries. By resorting to generalized hypertree decompositions, we can extend our index to arbitrary conjunctive queries for supporting spatial conjunctive queries. Finally, we show how our new indexes can be used to improve the running time of known algorithms in the relational setting.
DBOS (DataBase Operating System) is a novel capability that integrates web services, operating system functions, and database features to significantly reduce web-deployment effort while increasing resilience. Integration of high performance network sensing enables DBOS web services to collaboratively create a shared awareness of their network environments to enhance their collective resilience and security. Network sensing is added to DBOS using GraphBLAS hypersparse traffic matrices via two approaches: (1) Python-GraphBLAS and (2) OneSparse PostgreSQL. These capabilities are demonstrated using the workflow and analytics from the IEEE/MIT/Amazon Anonymized Network Sensing Graph Challenge. The system was parallelized using pPython and benchmarked using 64 compute nodes on the MIT SuperCloud. The web request rate sustained by a single DBOS instance was ${>}10^5$, well above the required maximum, indicating that network sensing can be added to DBOS with negligible overhead. For collaborative awareness, many DBOS instances were connected to a single DBOS aggregator. The Python-GraphBLAS and OneSparse PostgreSQL implementations scaled linearly up to 64 and 32 nodes respectively. These results suggest that DBOS collaborative network awareness can be achieved with a negligible increase in computing resources.
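For readers unfamiliar with traffic matrices, the sketch below builds the basic object: a sparse matrix whose entry (i, j) counts packets observed from anonymized source i to destination j. The paper builds hypersparse versions of these with Python-GraphBLAS and OneSparse PostgreSQL; scipy.sparse and the toy address space used here are assumptions made only to keep the illustration self-contained.

```python
# Minimal anonymized network traffic matrix: entry (i, j) counts packets from
# source i to destination j. scipy.sparse stands in for the GraphBLAS-based
# hypersparse implementations described in the paper.
import numpy as np
from scipy.sparse import coo_matrix

# Anonymized source/destination indices for a batch of observed packets.
sources      = np.array([3, 3, 7, 42, 42, 42])
destinations = np.array([9, 9, 1,  9,  5,  5])
counts       = np.ones_like(sources)

n = 2**17                      # size of the anonymized address space (assumption)
traffic = coo_matrix((counts, (sources, destinations)), shape=(n, n)).tocsr()

print("total packets:", traffic.sum())
print("packets 3 -> 9:", traffic[3, 9])
print("fan-out of 42:", traffic[42].nnz)
```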
Setchain has been proposed to increase blockchain scalability by relaxing the strict total order requirement among transactions. Setchain organizes elements into a sequence of sets, referred to as epochs, so that elements within each epoch are unordered. In this paper, we propose and evaluate three distinct Setchain algorithms that leverage an underlying block-based ledger. Vanilla is a basic implementation that serves as a reference point. Compresschain aggregates elements into batches, and compresses these batches before appending them as epochs in the ledger. Hashchain converts batches into fixed-length hashes, which are appended as epochs in the ledger. This requires Hashchain to use a distributed service to recover a batch's contents from its hash. To allow light clients to safely interact with only one server, the proposed algorithms maintain, as part of the Setchain, proofs for the epochs. An epoch-proof is the hash of the epoch, cryptographically signed by a server. A client can verify the correctness of an epoch with $f+1$ epoch-proofs (where $f$ is the maximum number of Byzantine servers assumed). All three Setchain algorithms are implemented on top of the CometBFT blockchain application platform. We conducted performance evaluations across various configurations, using clusters of four, seven, and ten servers. Our results show that the Setchain algorithms reach orders of magnitude higher throughput than the underlying blockchain, and achieve finality with latency below 4 seconds.
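The epoch-proof check is simple enough to sketch: a light client accepts an epoch once at least f+1 distinct servers have signed its hash. The signature scheme (Ed25519 via the `cryptography` package) and the epoch serialization below are assumptions for illustration; the paper's implementation sits on top of CometBFT and may differ.

```python
# Sketch of a light client verifying an epoch with f+1 epoch-proofs, where each
# proof is a server's signature over the epoch hash. Scheme and serialization
# are illustrative assumptions.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def epoch_hash(epoch_elements):
    # Hash the (unordered) epoch as a sorted, newline-joined byte string.
    return hashlib.sha256(b"\n".join(sorted(epoch_elements))).digest()

def verify_epoch(epoch_elements, proofs, server_keys, f):
    """Accept the epoch if at least f+1 distinct servers signed its hash."""
    digest = epoch_hash(epoch_elements)
    valid_servers = set()
    for server_id, signature in proofs:
        try:
            server_keys[server_id].verify(signature, digest)
            valid_servers.add(server_id)
        except (KeyError, InvalidSignature):
            continue
    return len(valid_servers) >= f + 1

# Toy setup: 4 servers, at most f = 1 Byzantine.
keys = [Ed25519PrivateKey.generate() for _ in range(4)]
pubs = {i: k.public_key() for i, k in enumerate(keys)}
epoch = [b"tx-a", b"tx-b", b"tx-c"]
proofs = [(i, keys[i].sign(epoch_hash(epoch))) for i in (0, 2)]
print(verify_epoch(epoch, proofs, pubs, f=1))   # True: 2 >= f+1 valid proofs
```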
Reliable data quality is crucial for downstream analysis of tabular datasets, yet rule-based validation often struggles with inefficiency, human intervention, and high computational costs. We present a three-stage framework that combines statistical inlier detection with LLM-driven rule and code generation. After filtering data samples through traditional clustering, we iteratively prompt LLMs to produce semantically valid quality rules and synthesize their executable validators through code-generating LLMs. To generate reliable quality rules, we aid LLMs with retrieval-augmented generation (RAG) by leveraging external knowledge sources and domain-specific few-shot examples. Robust guardrails ensure the accuracy and consistency of both rules and code snippets. Extensive evaluations on benchmark datasets confirm the effectiveness of our approach.
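A bare-bones skeleton of this three-stage flow is sketched below with stubbed components: a toy z-score inlier filter in place of clustering, and plain functions standing in for the LLM-generated rules and validators. The function names, thresholds, and rule format are assumptions for illustration only; the paper's clustering, RAG retrieval, and guardrails are more involved.

```python
# Skeleton of the three-stage framework with stubbed stand-ins for each stage.
import statistics

def filter_inliers(values, z_max=2.0):
    """Stage 1 stand-in: keep samples within z_max standard deviations (toy threshold)."""
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    return [v for v in values if abs(v - mu) <= z_max * sigma]

def generate_rule(column_name, samples):
    """Stage 2 stand-in: an LLM with RAG context would propose a quality rule here."""
    return f"{column_name} must lie in [{min(samples)}, {max(samples)}]"

def synthesize_validator(samples):
    """Stage 3 stand-in: a code-generating LLM would emit this executable validator."""
    lo, hi = min(samples), max(samples)
    return lambda v: lo <= v <= hi

ages = [23, 31, 45, 29, 38, 410]            # 410 is a data-entry error
inliers = filter_inliers(ages)
print(generate_rule("age", inliers))
validator = synthesize_validator(inliers)
print([validator(v) for v in ages])
```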
In recent years, there has been significant progress in the development of deep learning models over relational databases, including architectures based on heterogeneous graph neural networks (hetero-GNNs) and heterogeneous graph transformers. In effect, such architectures state how the database records and links (e.g., foreign-key references) translate into a large, complex numerical expression, involving numerous learnable parameters. This complexity makes it hard to explain, in human-understandable terms, how a model uses the available data to arrive at a given prediction. We present a novel framework for explaining machine-learning models over relational databases, where explanations are view definitions that highlight focused parts of the database that contribute the most to the model's prediction. We establish such global abductive explanations by adapting the classic notion of determinacy by Nash, Segoufin, and Vianu (2010). In addition to tuning the tradeoff between determinacy and conciseness, the framework allows controlling the level of granularity by adopting different fragments of view definitions, such as ones highlighting whole columns, foreign keys between tables, relevant groups of tuples, and so on. We investigate the realization of the framework in the case of hetero-GNNs. We develop heuristic algorithms that avoid the exhaustive search over the space of all databases. We propose techniques that are model-agnostic, and others that are tailored to hetero-GNNs via the notion of learnable masking. Our approach is evaluated through an extensive empirical study on the RelBench collection, covering a variety of domains and different record-level tasks. The results demonstrate the usefulness of the proposed explanations, as well as the efficiency of their generation.
To obtain insights from event data, advanced process mining methods assess the similarity of activities to incorporate their semantic relations into the analysis. Here, distributional similarity that captures similarity from activity co-occurrences is commonly employed. However, existing work on distributional similarity in process mining adopts neural network-based approaches developed for natural language processing, e.g., word2vec and autoencoders. While these approaches have been shown to be effective, their downsides are high computational costs and limited interpretability of the learned representations. In this work, we argue for simplicity in the modeling of distributional similarity of activities. We introduce count-based embeddings that avoid a complex training process and offer a directly interpretable representation. To underpin our call for simple embeddings, we contribute a comprehensive benchmarking framework, which includes means to assess the intrinsic quality of embeddings, their performance in downstream applications, and their computational efficiency. In experiments that compare against the state of the art, we demonstrate that count-based embeddings provide a highly effective and efficient basis for distributional similarity between activities in event data.
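A minimal version of such a count-based embedding is sketched below: each activity is represented by the vector of its co-occurrence counts with other activities within a sliding window over each trace. The window size and symmetric, unweighted counting are illustrative assumptions; the paper's exact counting and weighting choices are not reproduced here.

```python
# Minimal count-based activity embedding from activity co-occurrences in traces.
from collections import defaultdict

def count_embeddings(traces, window=2):
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for i, act in enumerate(trace):
            for j in range(max(0, i - window), min(len(trace), i + window + 1)):
                if j != i:
                    counts[act][trace[j]] += 1
    activities = sorted({a for t in traces for a in t})
    return {a: [counts[a][b] for b in activities] for a in activities}, activities

event_log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "check", "approve", "archive"],
]
vectors, vocab = count_embeddings(event_log)
print(vocab)
print(vectors["approve"])   # directly interpretable co-occurrence profile
```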
Knowledge graph construction has become an essential domain for the future of biomedical research, but current approaches demand a great deal of redundant labor. These redundancies result from the lack of data standards and of "knowledge-graph ready" data at the sources. Using the KGX standard, we aim to solve these issues. Herein we introduce Koza, a Python software package that streamlines ingesting raw biomedical information into the KGX format, and the Koza-Hub, an associated set of conversion processes for thirty gold-standard biomedical data sources. Our approach is to turn knowledge graph ingests into a set of primitive operations, provide configuration through YAML files, and enforce compliance with the chosen data schema.
Distributed systems require robust, scalable identifier schemes to ensure data uniqueness and efficient indexing across multiple nodes. This paper presents a comprehensive analysis of the evolution of distributed identifiers, comparing traditional auto-increment keys with UUIDv4, UUIDv7, and ULIDs. We combine mathematical calculation of collision probabilities with empirical experiments measuring generation speed and network transmission overhead in a simulated distributed environment. Results demonstrate that ULIDs significantly outperform UUIDv4 and UUIDv7, reducing network overhead by 83.7% and increasing generation speed by 97.32%. Statistical analysis further shows ULIDs offer a 98.42% lower collision risk compared to UUIDv7, while maintaining negligible collision probabilities even at high generation rates. These findings highlight ULIDs as an optimal choice for high-performance distributed systems, providing efficient, time-ordered, and lexicographically sortable identifiers suitable for scalable applications. All source code, datasets, and analysis scripts utilized in this research are publicly available in our dedicated repository at https://github.com/nimakarimiank/uids-comparison. This repository contains comprehensive documentation of the experimental setup, including configuration files for the distributed environment, producer and consumer implementations, and message broker integration. Additionally, it provides the data scripts and datasets. Researchers and practitioners are encouraged to explore the repository for full reproducibility of the experiments and to facilitate further investigation or extension of the presented work.
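The usual way to reason about such collision risks is the birthday-bound approximation p ≈ 1 - exp(-n(n-1)/(2N)) for n identifiers drawn from a space of N random values, as sketched below. The random-bit counts (122 for UUIDv4, 74 for UUIDv7 and 80 for ULID within a single millisecond under purely random generation) follow the respective specifications; the paper's own statistical model may differ.

```python
# Birthday-bound collision estimate for n random identifiers from 2**bits values.
import math

def collision_probability(n_ids, random_bits):
    space = 2 ** random_bits
    return 1.0 - math.exp(-n_ids * (n_ids - 1) / (2.0 * space))

for name, bits in [("UUIDv4", 122), ("UUIDv7", 74), ("ULID", 80)]:
    p = collision_probability(n_ids=1_000_000, random_bits=bits)
    print(f"{name:7s} ~ {p:.3e} collision probability for 1M identifiers")
```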
Cardinality estimation is an important part of query optimization in DBMSs. We develop a Quantum Cardinality Estimation (QCardEst) approach using Quantum Machine Learning with a Hybrid Quantum-Classical Network. We define a compact encoding for turning SQL queries into a quantum state, which requires only as many qubits as there are tables in the query. This allows the processing of a complete query with a single variational quantum circuit (VQC) on current hardware. In addition, we compare multiple classical post-processing layers that turn the probability vector output by the VQC into a cardinality value. We introduce Quantum Cardinality Correction (QCardCorr), which improves classical cardinality estimators by multiplying their output with a correction factor generated by a VQC. With QCardCorr, we achieve an improvement over the standard PostgreSQL optimizer of 6.37 times for JOB-light and 8.66 times for STATS. For JOB-light we even outperform MSCN by a factor of 3.47.
Organizations use data lakes to store and analyze sensitive data, but hackers may compromise data lake storage to bypass access controls and access sensitive data. To address this, we propose Membrane, a system that (1) cryptographically enforces data-dependent access control views over a data lake, (2) without restricting the analytical queries data scientists can run. We observe that data lakes, unlike DBMSes, disaggregate computation and storage into separate trust domains, making at-rest encryption sufficient to defend against remote attackers targeting data lake storage, even when running analytical queries in plaintext. This leads to a new system design for Membrane that combines encryption at rest with SQL-aware encryption. Using block ciphers, fast symmetric-key primitives with hardware acceleration in CPUs, we develop a new SQL-aware encryption protocol well-suited to at-rest encryption. Membrane adds overhead only at the start of an interactive session due to decrypting views, delaying the first query result by up to $\approx 20\times$; subsequent queries process decrypted data in plaintext, resulting in low amortized overhead.
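The block-cipher primitive underlying at-rest encryption is easy to sketch; below, a serialized column chunk is sealed with AES-GCM (hardware-accelerated on most CPUs via AES-NI) from the `cryptography` package, with the object name bound as associated data. This illustrates only the primitive; Membrane's SQL-aware encryption protocol, view enforcement, and key management are not reproduced here, and the object layout shown is an assumption.

```python
# Sketch of block-cipher-based at-rest encryption for a column chunk using
# AES-GCM. Object layout and key handling are illustrative assumptions.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

column_chunk = b"42,17,93,8,105"               # serialized column values
object_name = b"lake/orders/amount/part-0001"  # bound as associated data

nonce = os.urandom(12)                         # unique per encryption
ciphertext = aead.encrypt(nonce, column_chunk, object_name)

# Storage holds only ciphertext at rest; the query-side trust domain decrypts
# before running analytical queries in plaintext.
plaintext = aead.decrypt(nonce, ciphertext, object_name)
assert plaintext == column_chunk
```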
SQL queries in real-world analytical environments, whether written by humans or generated automatically, often suffer from syntax errors, inefficiency, or semantic misalignment, especially in complex OLAP scenarios. To address these challenges, we propose SQLGovernor, an LLM-powered SQL toolkit that unifies multiple functionalities, including syntax correction, query rewriting, query modification, and consistency verification, within a structured framework enhanced by knowledge management. SQLGovernor introduces a fragment-wise processing strategy to enable fine-grained rewriting and localized error correction, significantly reducing the cognitive load on the LLM. It further incorporates a hybrid self-learning mechanism guided by expert feedback, allowing the system to continuously improve through DBMS output analysis and rule validation. Experiments on benchmarks such as BIRD and BIRD-CRITIC, as well as industrial datasets, show that SQLGovernor consistently boosts the performance of base models by up to 10%, while minimizing reliance on manual expertise. Deployed in production environments, SQLGovernor demonstrates strong practical utility and effective performance.