The recently introduced dependently typed higher-order logic (DHOL) offers an interesting compromise between expressiveness and automation support. It sacrifices the decidability of its type system in order to significantly extend its expressiveness over standard HOL. Yet it retains strong automated theorem proving support via a sound and complete translation to HOL. We leverage this design to extend DHOL with refinement and quotient types. Both of these are commonly requested by practitioners but rarely provided by automated theorem provers. This is because they inherently require undecidable typing and thus are very difficult to retrofit to decidable type systems. But with DHOL already doing the heavy lifting, adding them is not only possible but elegant and simple. Concretely, we add refinement and quotient types as special cases of subtyping. This turns the associated canonical inclusion resp. projection maps into identity maps and thus avoids costly changes in representation. We present the syntax, semantics, and translation to HOL for the extended language, including the proofs of soundness and completeness.
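As a hedged sketch of the two type formers (the notation here is illustrative and not necessarily the paper's own): a refinement type carves a subtype out of a base type by a predicate, and a quotient type identifies elements up to an equivalence relation,

$$ A|_p \;=\; \{\, x : A \mid p\,x \,\}, \qquad A/r \;=\; A \text{ modulo the equivalence relation } r, $$

where the refinement $A|_p$ is a subtype of $A$ and $A$ is in turn treated as a subtype of the quotient $A/r$. Under this reading the canonical inclusion $A|_p \to A$ and the canonical projection $A \to A/r$ act as identities on representatives, which is why no change of representation is needed.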
Intuitionistic conditional logic, studied by Weiss, Ciardelli and Liu, and Olkhovikov, aims at providing a constructive analysis of conditional reasoning. In this framework, the would and the might conditional operators are no longer interdefinable. The intuitionistic conditional logics considered in the literature are defined by setting Chellas' conditional logic CK, whose semantics is defined using selection functions, within the constructive and intuitionistic framework introduced for intuitionistic modal logics. This operation gives rise to a constructive and an intuitionistic variant of (might-free-) CK, which we call CCKbox and IntCK respectively. Building on the proof systems defined for CK and for intuitionistic modal logics, in this paper we introduce a nested calculus for IntCK and a sequent calculus for CCKbox. Based on the sequent calculus, we define CCK, a conservative extension of Weiss' logic CCKbox with the might operator. We introduce a class of models and an axiomatization for CCK, and extend these results to several extensions of CCK.
This paper enriches preexisting satisfiability tests for unquantified languages, which in turn augment a fragment of Tarski's elementary algebra with unary real functions possessing a continuous first derivative. Two sorts of individual variables are available, one ranging over real numbers and the other one ranging over the functions of interest. Numerical terms are built from real variables through constructs designating the four basic arithmetic operations and through the function-application constructs $f(t)$ and $D[\,f\,](t)$, where $f$ stands for a function variable, $t$ for a numerical term, and $D[\,\cdot\,]$ designates the differentiation operator. Comparison relators can be placed between numerical terms. An array of predicate symbols is also available, designating various relationships between functions, as well as function properties, that may hold over intervals of the real line; those are: (pointwise) function comparisons, strict and nonstrict monotonicity~/~convexity~/~concavity properties, and comparisons between the derivative of a function and a real term (which, with respect to earlier research, are here extended to (semi-)open intervals). The decision method we propose consists in preprocessing the given formula into an equisatisfiable quantifier-free formula of the elementary algebra of real numbers, whose satisfiability can then be checked by means of Tarski's decision method. No direct reference to functions will appear in the target formula, each function variable having been superseded by a collection of stub real variables; hence, in order to prove that the proposed translation is satisfiability-preserving, we must exhibit a sufficiently flexible family of interpolating $C^1$ functions that can accommodate a model for the source formula whenever the target formula turns out to be satisfiable.
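As a hedged illustration of such a language (the predicate names below are made up for exposition and are not the paper's actual syntax), a quantifier-free formula could simultaneously assert strict monotonicity over an interval, a pointwise derivative bound, and a function comparison:

$$ \mathit{StrictIncr}(f,[a,b]) \;\wedge\; D[\,f\,](a) > 2 \cdot a + 1 \;\wedge\; f(b) \le g(b). $$

The proposed decision method would compile such a formula into a quantifier-free formula of Tarski's elementary algebra over stub real variables standing for the values $f(a)$, $f(b)$, $g(b)$, $D[\,f\,](a)$, and so on.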
The Princess Marijke lock complex is a large lock and water-protection installation in the Netherlands between the river Rhine and the Amsterdam-Rijnkanaal -- a large waterway connecting the Rhine to the port of Amsterdam. The lock complex consists of two independent locks and a moveable flood-protection barrier. Ensuring safe control of the lock complex is of utmost importance to guarantee both flood-protection and reliable ship operations. This paper gives a precise, formal description of the software control of the lock complex in fewer than 400 lines of mCRL2 code. This description can act as a blueprint for how the software of this lock complex needs to be constructed. Moreover, using model checking, 53 software requirements are shown to be valid, ensuring that the formal description of the behaviour is correct with regard to these properties and is unlikely to contain mistakes or oversights.
Learned Differentiable Boolean Logic Networks (DBNs) already deliver efficient inference on resource-constrained hardware. We extend them with a trainable, differentiable interconnect whose parameter count remains constant as input width grows, allowing DBNs to scale to far wider layers than earlier learnable-interconnect designs while preserving their accuracy advantage. To further reduce model size, we propose two complementary pruning stages: a SAT-based logic equivalence pass that removes redundant gates without affecting performance, and a similarity-based, data-driven pass that outperforms a magnitude-style greedy baseline and offers a superior compression-accuracy trade-off.
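A minimal sketch of the equivalence-based pruning idea (toy names throughout, not the paper's implementation): a real pass would pose gate equivalence as a SAT query, but for gates over a handful of inputs the query can be stood in for by exhaustive truth-table enumeration.

```python
from itertools import product

# Hypothetical toy network: each gate is (op, in_a, in_b) over named signals,
# listed in topological order. Gates computing the same Boolean function of
# the primary inputs are redundant; one of each pair can be pruned.

OPS = {"and": lambda a, b: a and b,
       "or":  lambda a, b: a or b,
       "xor": lambda a, b: a != b}

def evaluate(gates, inputs, assignment):
    """Evaluate every gate under one assignment of the primary inputs."""
    values = dict(zip(inputs, assignment))
    for name, (op, a, b) in gates.items():
        values[name] = OPS[op](values[a], values[b])
    return values

def redundant_pairs(gates, inputs):
    """Find gate pairs that agree on every input assignment (equivalent)."""
    tables = {g: [] for g in gates}
    for assignment in product([0, 1], repeat=len(inputs)):
        values = evaluate(gates, inputs, assignment)
        for g in gates:
            tables[g].append(values[g])
    names = list(gates)
    return [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
            if tables[x] == tables[y]]

inputs = ["p", "q"]
gates = {
    "g1": ("and", "p", "q"),
    "g2": ("or",  "p", "q"),
    "g3": ("and", "q", "p"),   # same function as g1: prunable
}
print(redundant_pairs(gates, inputs))   # -> [('g1', 'g3')]
```

A SAT-based pass scales this check to deep cones where enumeration is infeasible, which is the point of using a solver rather than truth tables.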
The belief revision field is opulent in new proposals and indigent in analyses of existing approaches. Much work hinges on postulates, employed as syntactic characterizations: some revision mechanism is equivalent to some properties. Postulates constrain specific revision instances: certain revisions update certain beliefs in a certain way. As an example, if the revision is consistent with the current beliefs, it is incorporated with no other change. A postulate like this tells what revisions must do and neglects what they can do. Can they reach a certain state of beliefs? Can they reach all possible states of beliefs? Can they reach all possible states of beliefs from no previous belief? Can they reach a dogmatic state of beliefs, where everything not believed is impossible? Can they make two conditions equally believed? An application where every possible state of beliefs is sensible requires each state of beliefs to be reachable. An application where conditions may be equally believed requires such a belief state to be reachable. An application where beliefs may become dogmatic requires a way to make them dogmatic. Such doxastic states need to be reachable in one way or another, not in a specific way as dictated by a typical belief revision postulate. This is an ability, not a constraint: the ability of being plastic, equating, or dogmatic. Amnesic, correcting, believer, damascan, and learnable are other abilities. Each revision mechanism owns some of these abilities and lacks the others: lexicographic, natural, restrained, very radical, full meet, radical, severe, moderate severe, plain severe and deep severe revision are each proved to possess certain abilities.
We propose LeanLTL, a unifying framework for linear temporal logics in Lean 4. LeanLTL supports reasoning about traces that represent either infinite or finite linear time. The library allows traditional LTL syntax to be combined with arbitrary Lean expressions, making it straightforward to define properties involving numerical or other types. We prove that standard flavors of LTL can be embedded in our framework. The library also provides automation for reasoning about LeanLTL formulas in a way that facilitates using Lean's existing tactics. Finally, we provide examples illustrating the utility of the library in reasoning about systems that come from applications.
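A hedged Lean 4 sketch of the general idea of embedding LTL over traces (the names and definitions below are hypothetical and are not LeanLTL's actual API):

```lean
-- Hypothetical sketch (not LeanLTL's real API): trace properties are
-- propositions over infinite traces; "next" and "always" are defined by
-- shifting the trace.
def Trace (α : Type) := Nat → α

def TraceProp (α : Type) := Trace α → Prop

def next {α} (φ : TraceProp α) : TraceProp α :=
  fun t => φ (fun n => t (n + 1))

def always {α} (φ : TraceProp α) : TraceProp α :=
  fun t => ∀ k, φ (fun n => t (n + k))

-- Arbitrary Lean expressions can appear inside properties, e.g. numeric ones:
def nonneg : TraceProp Nat := fun t => t 0 ≥ 0

-- The constant trace 1, 1, 1, ... is always nonnegative.
example : always nonneg (fun _ => 1) := fun _ => Nat.zero_le 1
```

A shallow embedding of this shape is what lets ordinary Lean tactics discharge temporal goals, since a formula like `always nonneg t` unfolds to a plain universally quantified proposition.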
We consider interpolation from the viewpoint of fully automated theorem proving in first-order logic as a general core technique for mechanized knowledge processing. For Craig interpolation, our focus is on the two-stage approach, where first an essentially propositional ground interpolant is calculated that is then lifted to a quantified first-order formula. We discuss two possibilities to obtain a ground interpolant from a proof, with clausal tableaux, and with resolution. Established preprocessing techniques for first-order proving can also be applied for Craig interpolation if they are restricted in specific ways. Equality encodings from automated reasoning justify strengthened variations of Craig interpolation. Further contributions to Craig interpolation have also emerged from automated reasoning. As an approach to uniform interpolation we introduce second-order quantifier elimination with examples and describe the basic algorithms DLS and SCAN.
The CFI-graphs, named after Cai, F\"urer, and Immerman, are central to the study of graph isomorphism testing and of first-order logic with counting. They are colored graphs, and the coloring plays a role in many of their applications. As usual, it is not hard to remove the coloring by some extra graph gadgets, but at the cost of blowing up the size of the graphs and changing some of their parameters as well. This might lead to suboptimal combinatorial bounds important to their applications. It has since been shown that some uncolored variants of the CFI-graphs serve the same purposes. We show that this already applies to the graphs obtained from the original CFI-graphs by forgetting the colors. Moreover, we will see that there is a first-order formula $\varphi(x,y)$ expressing in almost all uncolored CFI-graphs that $x$ and $y$ have the same color in the corresponding colored graphs.
SAT sweeping has long been a cornerstone technique in logic simplification and equivalence checking at the bit level, leveraging structural hashing, simulation and SAT solving to prune redundant logic. However, with the growing adoption of word-level constructs in hardware verification, such as bit-vector operations, arithmetic and arrays, SAT sweeping has lacked a word-level counterpart. In this paper, we introduce SMT-Sweep, a novel extension of SAT sweeping to the word level, grounded in Satisfiability Modulo Theories (SMT). SMT-Sweep takes advantage of simulation and equivalence detection to handle SMT terms with rich bit-vector operations and array semantics. Our framework incorporates both randomized and constraint-driven word-level simulation tailored to symbolic expressions and operator semantics beyond pure Boolean logic. Experimental results show that SMT-Sweep achieves significant speed-up compared to state-of-the-art bit-level SAT sweeping and word-level monolithic SMT solving (averaging around 44x and 69x, respectively). To the best of our knowledge, this is the first work that brings sweeping techniques to SMT-based hardware verification. The implementation is open-sourced at: https://github.com/yangziyiiii/SMT-Sweep.
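A minimal sketch of the simulation-then-confirm loop at the heart of sweeping (not the SMT-Sweep tool itself; names and the 4-bit setting are invented for illustration): terms are simulated on random inputs, terms with identical signatures become merge candidates, and a real sweeper would then confirm each candidate with an SMT query. Here exhaustive enumeration over the tiny input space stands in for that query.

```python
import random

WIDTH = 4
MASK = (1 << WIDTH) - 1   # 4-bit bit-vectors: values 0..15

# Toy word-level terms over two 4-bit inputs.
terms = {
    "t1": lambda x, y: (x + y) & MASK,
    "t2": lambda x, y: (y + x) & MASK,   # equivalent to t1
    "t3": lambda x, y: (x - y) & MASK,
    "t4": lambda x, y: (x ^ y) & MASK,
}

def signatures(terms, samples=64, seed=0):
    """Simulate every term on the same random inputs; collect output vectors."""
    rng = random.Random(seed)
    sigs = {name: [] for name in terms}
    for _ in range(samples):
        x, y = rng.randrange(MASK + 1), rng.randrange(MASK + 1)
        for name, fn in terms.items():
            sigs[name].append(fn(x, y))
    return {name: tuple(sig) for name, sig in sigs.items()}

def sweep(terms):
    """Group terms by signature, then confirm candidates exhaustively."""
    buckets = {}
    for name, sig in signatures(terms).items():
        buckets.setdefault(sig, []).append(name)
    merged = []
    for names in buckets.values():
        rep = names[0]
        for other in names[1:]:
            # Stand-in for the SMT equivalence query: check all inputs.
            if all(terms[rep](x, y) == terms[other](x, y)
                   for x in range(MASK + 1) for y in range(MASK + 1)):
                merged.append((other, rep))   # 'other' merges into 'rep'
    return merged

print(sweep(terms))   # -> [('t2', 't1')]
```

The point of the signature phase is to make the expensive solver query rare: only terms that already agree on every simulated input are ever checked for true equivalence.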
In this paper, we advance local search for Satisfiability Modulo the Theory of Nonlinear Real Arithmetic (SMT-NRA for short). First, we introduce a two-dimensional cell-jump move, called \emph{$2d$-cell-jump}, generalizing the key operation, cell-jump, of the local search method for SMT-NRA. Then, we propose an extended local search framework, named \emph{$2d$-LS} (following the local search framework, LS, for SMT-NRA), integrating the model constructing satisfiability calculus (MCSAT) framework to improve search efficiency. To further improve the efficiency of MCSAT, we implement a recently proposed technique called the \emph{sample-cell projection operator} for MCSAT, which is well suited for CDCL-style search in the real domain and helps guide the search away from conflicting states. Finally, we design a hybrid framework for SMT-NRA combining MCSAT, $2d$-LS and OpenCAD, to improve search efficiency through information exchange. The experimental results demonstrate improvements in local search performance, highlighting the effectiveness of the proposed methods.
This paper investigates the expressive power of a minimal fragment of separation logic extended with natural numbers. Specifically, it demonstrates that the fragment consisting solely of the intuitionistic points-to predicate, the constant 0, and the successor function is sufficient to encode all $\Pi^0_1$ formulas of Peano Arithmetic (PA). The authors construct a translation from PA into this fragment, showing that a $\Pi^0_1$ formula is valid in the standard model of arithmetic if and only if its translation is valid in the standard interpretation of the separation logic fragment. This result implies the undecidability of validity in the fragment, despite its syntactic simplicity. The translation leverages a heap-based encoding of arithmetic operations - addition, multiplication, and inequality - using structured memory cells. The paper also explores the boundaries of this encoding, showing that the translation does not preserve validity for $\Sigma^0_1$ formulas. Additionally, an alternative undecidability proof is presented via a reduction from finite model theory. Finally, the paper establishes that the validity problem for this fragment is $\Pi^0_1$-complete, highlighting its theoretical significance in the landscape of logic and program verification.
We present a different proof of the insecurity problem for XOR, solved by Chevalier, Kuesters, Rusinowitch and Turuani (2005). Our proof uses the notion of typed terms and well-typed proofs, and removes a restriction on the class of protocols to which the [CKRT05] proof applies, by introducing a slightly different (but very natural) notion of protocols, where honest agent sends are derivable from previous receives in the same session.
This paper introduces AFDL, a logic-based framework for reasoning about safety, security, and defense interactions in Attack-Fault-Defense Trees, a model that captures the safety, security, and defense domains in a single framework. We present AFDL and propose a structured, domain-specific query language, LangAFDL, which enables domain experts to express complex analysis goals through intuitive templates. LangAFDL supports both Boolean and quantified queries as well as minimal cut set analysis, capturing the interplay between safety, security, and defensive measures. We illustrate the expressiveness and utility of the approach through representative queries over two different real-world case studies: Gridshield and Ground Segment as a Service. The formalization lays the groundwork for automated safety-security analyses of mission-critical systems and paves the way for future tool development and integration into design workflows.
We introduce BayesL, a novel logical framework for specifying, querying, and verifying the behaviour of Bayesian networks (BNs). BayesL (pronounced "Basil") is a structured language that allows for the creation of queries over BNs. It facilitates versatile reasoning concerning causal and evidence-based relationships, and permits comprehensive what-if scenario evaluations without the need for manual modifications to the model.
We give a quantifier elimination procedure for one-parametric Presburger arithmetic, the extension of Presburger arithmetic with the function $x \mapsto t \cdot x$, where $t$ is a fixed free variable ranging over the integers. This resolves an open problem proposed in [Bogart et al., Discrete Analysis, 2017]. As conjectured in [Goodrick, Arch. Math. Logic, 2018], quantifier elimination is obtained for the extended structure featuring all integer division functions $x \mapsto \lfloor{\frac{x}{f(t)}}\rfloor$, one for each integer polynomial $f$. Our algorithm works by iteratively eliminating blocks of existential quantifiers. The elimination of a block builds on two sub-procedures, both running in non-deterministic polynomial time. The first one is an adaptation of a recently developed and efficient quantifier elimination procedure for Presburger arithmetic, modified to handle formulae with coefficients over the ring $\mathbb{Z}[t]$ of univariate polynomials. The second is reminiscent of the so-called "base $t$ division method" used by Bogart et al. As a result, we deduce that the satisfiability problem for the existential fragment of one-parametric Presburger arithmetic (which encompasses a broad class of non-linear integer programs) is in NP, and that the smallest solution to a satisfiable formula in this fragment is of polynomial bit size.
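A hedged aside on why the integer-division functions $x \mapsto \lfloor x / f(t) \rfloor$ are natural here (a standard observation, not a step reproduced from the paper's proof): already divisibility by the parameter becomes quantifier-free with them, since for $t > 0$,

$$ \exists x\, \bigl(y = t \cdot x\bigr) \;\Longleftrightarrow\; t \cdot \Bigl\lfloor \frac{y}{t} \Bigr\rfloor = y. $$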
Large Language Models (LLMs) have rapidly transformed the landscape of artificial intelligence, enabling natural language interfaces and dynamic orchestration of software components. However, their reliance on probabilistic inference limits their effectiveness in domains requiring strict logical reasoning, discrete decision-making, and robust interpretability. This paper investigates these limitations and proposes a neurosymbolic approach that augments LLMs with logic-based reasoning modules, particularly leveraging Prolog predicates and composable toolsets. By integrating first-order logic and explicit rule systems, our framework enables LLMs to decompose complex queries into verifiable sub-tasks, orchestrate reliable solutions, and mitigate common failure modes such as hallucination and incorrect step decomposition. We demonstrate the practical benefits of this hybrid architecture through experiments on the DABStep benchmark, showing improved precision, coverage, and system documentation in multi-step reasoning tasks. Our results indicate that combining LLMs with modular logic reasoning restores engineering rigor, enhances system reliability, and offers a scalable path toward trustworthy, interpretable AI agents across complex domains.
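A hedged sketch of the neurosymbolic pattern described above (toy names, not the paper's actual framework): the probabilistic component proposes a structured sub-task, and a symbolic rule engine answers it exactly. A tiny forward-chaining engine over ground facts stands in for the Prolog predicates.

```python
# Ground facts as (relation, arg1, arg2) triples.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def forward_chain(facts):
    """Derive ('ancestor', X, Y) facts until a fixed point is reached,
    using two rules: parent(X,Y) -> ancestor(X,Y) and
    ancestor(X,Y), ancestor(Y,Z) -> ancestor(X,Z)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for rel, a, b in derived:
            if rel == "parent":
                new.add(("ancestor", a, b))
        for r1, a, b in derived:
            for r2, c, d in derived:
                if r1 == "ancestor" and r2 == "ancestor" and b == c:
                    new.add(("ancestor", a, d))
        if not new <= derived:
            derived |= new
            changed = True
    return derived

def answer(query):
    # In the full architecture an LLM would parse free text into this
    # structured query; here the decomposition step is already done.
    return query in forward_chain(facts)

print(answer(("ancestor", "alice", "carol")))   # -> True
print(answer(("ancestor", "carol", "alice")))   # -> False
```

The division of labor is the point: the language model handles flexible input and task decomposition, while every logical step is discharged by a deterministic engine whose answers can be audited.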
In this paper, we provide a uniform framework for investigating small circuit classes and bounds through the lens of ordinary differential equations (ODEs). Following an approach recently introduced to capture the class of polynomial-time computable functions via ODE-based recursion schemas and later applied to the context of functions computed by unbounded fan-in circuits of constant depth (FAC^0), we study multiple relevant small circuit classes. In particular, we show that natural restrictions on linearity and derivation along functions with specific growth rate correspond to kinds of functions that can be proved to be in various classes, ranging from FAC^0 to FAC^1. This reveals an intriguing link between constraints over linear-length ODEs and circuit computation, providing new tools to tackle the complex challenge of establishing bounds for classes in the circuit hierarchies and possibly enhancing our understanding of the role of counters in this setting. Additionally, we establish several completeness results, in particular obtaining the first ODE-based characterizations for the classes of functions computable in constant depth with unbounded fan-in and Mod 2 gates (FACC[2]) and in logarithmic depth with bounded fan-in Boolean gates (FNC^1).