Evaluating Massively Parallel Algorithms for DFA Minimisation, Equivalence Checking and Inclusion Checking
Abstract
We study parallel algorithms for the minimisation, equivalence checking and inclusion checking of Deterministic Finite Automata (DFAs). Regarding DFA minimisation, we implement four different massively parallel algorithms on Graphics Processing Units~(GPUs). Our results confirm the expectation that the algorithm with the theoretically best time complexity is not suitable in practice for running on GPUs, due to the large amount of resources it requires. We empirically verify that parallel partition refinement algorithms from the literature perform better in practice, even though their time complexity is worse. Furthermore, we introduce a novel algorithm that extends partition refinement with an additional parallel partial transitive closure step, and show that on specific benchmarks it has a better run-time complexity and performs better in practice. In addition, we address checking the language equivalence and inclusion of two DFAs. We consider the Hopcroft-Karp algorithm and explain how a variant of it can be parallelised for GPUs. We note that these problems can be encoded for the GPU-accelerated model checker \GPUexplore, allowing the use of its lockless hash table and fine-grained parallel work distribution mechanism.