Learning Temporal Properties for Explainability and Verification
The past decade has witnessed the rise of intelligent systems fueled by the large-scale adoption of data-driven techniques. However, due to the inherent black-box nature of these techniques, intelligent systems tend to have complex designs that, at times, even their designers struggle to comprehend fully. Consequently, it is not surprising that failures of intelligent systems are commonplace. For intelligent systems to be deployed for widespread use, humans must be able to place significant trust in them. Towards the goal of developing such trust, this thesis focuses on techniques that (i) explain the behavior of systems in a human-interpretable manner, and (ii) facilitate their verification through formal methods.

Our primary intent is to explain and formalize the temporal behavior of systems. To represent temporal behavior, we rely on formalisms originating from formal language theory, such as temporal logic and finite automata. These formalisms are considered easy to understand and, at the same time, formal enough to express temporal behavior unambiguously. Consequently, both have found numerous applications: as specifications for formal verification, as interpretable descriptions in Explainable AI, and so on.

Our approach to formalizing temporal behavior involves the automated learning of temporal properties from system executions given as input data. By varying aspects of the input data and of the formalism used to express temporal properties, we formulate several learning problems suited to diverse practical scenarios, including learning in the presence of noise and learning from positive data only. Moreover, we consider well-known temporal logics such as Linear Temporal Logic (LTL), Metric Temporal Logic (MTL), and the Property Specification Language (PSL), as well as finite automata such as Deterministic Finite Automata (DFAs).
We study various aspects of the considered learning problems. Most importantly, we design efficient algorithms to tackle them, relying on techniques such as satisfiability (SAT) solving, combinatorial search, and decision-tree learning. A crucial feature of our algorithms is that they come with guarantees (e.g., termination, soundness, and completeness) validated through formal proofs. Moreover, we investigate the associated decision problems and present complexity results, providing further insight into the learning problems. We implement all algorithms in user-friendly, open-source tools and evaluate them on a wide range of synthetic and real-world benchmarks. Through our experiments, we successfully learned meaningful temporal properties in a number of scenarios.
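To give a flavor of the learning setting described above, the following is a minimal sketch of an enumerative (combinatorial-search) learner over finite traces. It is not any of the thesis's actual algorithms; the tiny LTL fragment (atoms plus !, &, X, F, G with finite-trace semantics), the tuple encoding of formulas, and the function names `ev`, `enumerate_formulas`, and `learn` are all illustrative assumptions. The learner returns a smallest formula consistent with the positive and negative example traces.

```python
def ev(phi, trace, i=0):
    """Evaluate a formula (nested tuple) on a finite trace at position i.

    A trace is a list of sets of atomic propositions; finite-trace
    semantics: X requires a successor position to exist.
    """
    op = phi[0]
    if op == "AP":  # atomic proposition
        return phi[1] in trace[i]
    if op == "!":   # negation
        return not ev(phi[1], trace, i)
    if op == "&":   # conjunction
        return ev(phi[1], trace, i) and ev(phi[2], trace, i)
    if op == "X":   # next
        return i + 1 < len(trace) and ev(phi[1], trace, i + 1)
    if op == "F":   # eventually
        return any(ev(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "G":   # globally
        return all(ev(phi[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

def enumerate_formulas(atoms, size):
    """Yield all formulas of exactly the given syntactic size."""
    if size == 1:
        for a in atoms:
            yield ("AP", a)
        return
    # unary operators over formulas one node smaller
    for sub in enumerate_formulas(atoms, size - 1):
        for op in ("!", "X", "F", "G"):
            yield (op, sub)
    # conjunctions splitting the remaining size between both sides
    for left_size in range(1, size - 1):
        for left in enumerate_formulas(atoms, left_size):
            for right in enumerate_formulas(atoms, size - 1 - left_size):
                yield ("&", left, right)

def learn(positives, negatives, atoms, max_size=5):
    """Return a smallest formula holding on all positives and no negatives."""
    for size in range(1, max_size + 1):
        for phi in enumerate_formulas(atoms, size):
            if all(ev(phi, t) for t in positives) and \
               not any(ev(phi, t) for t in negatives):
                return phi
    return None  # no consistent formula within the size bound
```

Searching formulas in order of increasing size makes the sketch sound and (bounded-)complete by construction; the thesis's SAT-based and decision-tree-based algorithms address the same consistency problem far more efficiently.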
Author: | Rajarshi Roy |
---|---|
URN: | urn:nbn:de:hbz:386-kluedo-84250 |
DOI: | https://doi.org/10.26204/KLUEDO/8425 |
Advisor: | Daniel Neider, Rupak Majumdar |
Document Type: | Doctoral Thesis |
Cumulative document: | No |
Language of publication: | English |
Date of Publication (online): | 2024/10/15 |
Date of first Publication: | 2024/10/15 |
Publishing Institution: | Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau |
Granting Institution: | Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau |
Acceptance Date of the Thesis: | 2024/09/13 |
Date of the Publication (Server): | 2024/10/16 |
Tag: | Explainable AI; Formal Verification; Temporal Logic |
Page Number: | VIII, 166 |
Faculties / Organisational entities: | Kaiserslautern - Department of Computer Science |
CCS-Classification (computer science): | D. Software |
DDC-Classification: | 0 General works, computer science, information science / 004 Computer science |
Licence (German): | Creative Commons 4.0 - Attribution (CC BY 4.0) |