Commit 2d446842 authored by Hans-Jörg

Start restructuring differentiating between language and rules

parent bb902611
Pipeline #4460 passed
\subsection*{Unreleased}
Apply major changes to the structure of the document to clarify the difference
between the \emph{language} and the \emph{rules}.

List of rules:
\begin{itemize}
\item Improve description of $\proofRule{sko\_ex}$.
\begin{abstract}
\section*{Foreword}
This document is a speculative specification and reference of a proof
format for SMT solvers. The format consists of a language to express
proofs and a set of proof rules. On the one hand, the language is
inspired by natural deduction and is based on the widely used SMT-LIB
format. The language also includes a flexible mechanism to reason
about bound variables, which allows fine-grained preprocessing proofs.
On the other hand, the rules are structured around resolution and the
introduction of theory lemmas.
The specification is speculative in the sense that it is not yet
cast in stone, but it will evolve over time. It emerged from a list
This document is a reference of the Alethe format. The format
is designed to be a flexible format to represent unsatisfiability
proofs generated by SMT solvers. The overall design follows a few core
concepts, such as a natural-deduction style structure and resolution.
It is split into two parts: the proof language and a set of rules.
Section~\ref{sec:language} introduces the language, first informally,
then formally. Section~\ref{sec:rules-generic} discusses some core
concepts behind the Alethe proof rules.
At the end of the document, Section~\ref{sec:rules} contains a full list
of all proof rules used by SMT solvers supporting Alethe.
The concrete syntax and semantics are based on the SMT-LIB~\cite{SMTLIB}
format, which is widely used as input and output format of SMT
for processing, such as Skolemization, renaming of variables, and other
manipulations of bound variables~\cite{barbosa-2019}.
\section{The Alethe Language}
\label{sec:language}

This section provides an overview of the core concepts of the Alethe
language and also introduces some notation used throughout this document.
While the next section provides a formal definition of the language,
this overview of the core concepts should be helpful for
practitioners.
possible otherwise. Alethe requires that $\varepsilon$ is functional
with respect to logical equivalence: if for two formulas $\varphi$, $\psi$
that contain the free variable $x$, it holds that
$(\forall x.\,\varphi\leftrightarrow\psi)$,
then $(\varepsilon x.\, \varphi)\simeq (\varepsilon x.\, \psi)$.
As a matter of notation, we use the symbols $x$, $y$, $z$ for variables,
$f$, $g$, $h$ for functions, and $P$, $Q$ for predicates, i.e.,
\end{example}

\paragraph{Steps.}
A proof in the Alethe language is an indexed list of steps.
To mimic the concrete syntax we write a step in the form
\begin{plListEq}
c_1,\,\dots,\, c_k &\proofsep i. & \varphi_1,\dots ,\varphi_l &
t_i)$ where $x_i$ is a variable and $t_i$ is a term. The interpretation
of the arguments is rule specific. The list \(c_1, \dots, c_k\) is
the \emph{context} of the step. Contexts have their own section below.

Every proof ends with a step that has the empty clause as the conclusion
and an empty context.
The example above consists of five steps. Steps 4 and 5 use premises.
Since step 3 introduces a tautology, it uses no premises. However,
In the example above, the contexts are all empty.

\paragraph{Assumptions.}
The \proofRule{assume} command introduces a term as an assumption. The
proof starts with a number of \proofRule{assume} commands. Each such command
corresponds to an input assertion. Additional assumptions can be introduced
too. In this case each assumption must be discharged with an appropriate
step. The rule \proofRule{subproof} can be used to do so.
The example above uses two assumptions which are introduced in the first
two steps.
\paragraph{Subproofs and Lemmas.}
Alethe uses subproofs to prove lemmas and to manipulate the context.
To prove lemmas, a subproof can
introduce local assumptions. The \proofRule{subproof} \emph{rule} discharges the
local assumptions.
From an
assumption $\varphi$ and a formula $\psi$ proved by intermediate steps
\end{example}

\paragraph{Contexts.}
A specialty of Alethe proofs
is the step context. The context is a possibly empty list $[c_1,
\dots, c_l]$, where $c_i$ is either a variable or a variable-term tuple
denoted $x_i\mapsto t_i$. In the first case, we say that $c_i$ \emph{fixes} its
variable. Throughout this document $\Gamma$ denotes
\operatorname{subst}([x\mapsto 7, x, x \mapsto g(x)]) &= [g(x)/x]\\
\end{align*}
\paragraph{Implicit Reordering of Equalities.}
In addition to the explicit steps, solvers might reorder equalities, i.e.,
apply symmetry of the equality predicate, without generating steps.
Nevertheless, consumers of Alethe must consider the possible
implicit reordering of equalities.

\subsection{The Semantics of the Alethe Language}
\label{sec:semantic}
Most of the content is taken from the presentation and the correctness proof of
the format~\cite{barbosa-2019}.

\subsubsection{Abstract Inference System} % or Rules, or Rules of Inference
\label{sec:inference-system}
The inference rules used by our framework depend on a notion of
\]
\end{example}

\subsubsection{Correctness}
\label{sec:correctness}

\newcommand\thys{\mathcal T_1 \mathrel\cup \cdots \mathrel\cup \mathcal T_n \mathrel\cup {\simeq} \mathrel\cup \Choicename \mathrel\cup \mathrm{let}}
\end{proof}

\subsubsection{More Concrete Rules}
\label{sec:concrete-rules}

The high-level rules presented in the previous paragraph are useful for
% The above section actually does not talk much about semantics if
% it does not relate to logic, interpretation, etc...

\subsection{The Syntax}
\label{sec:syntax}

The concrete text representation of the Alethe proofs
the $\grT{(choice }\dots\grT{)}$ terms needed. Without this option,
no $\grT{define-fun}$ commands are issued and the constants are expanded.

\section{The Alethe Rules}
\label{sec:rules-generic}
Together with the language, the Alethe format also comes with a set
of proof rules. Section~\ref{sec:rules} gives a full list of all
proof rules. Currently, the proof rules correspond to the rules used
by the veriT solver. For the rest of this section, we will discuss some
general concepts related to the rules.
\paragraph{Tautologous Rules and Simple Deduction.}
Most rules introduce tautologies. One example is
the \proofRule{and\_pos} rule: $\neg (\varphi_1 \land \varphi_2 \land
\dots \land \varphi_n) \lor \varphi_i$. Other rules operate on only
one premise. Those rules are primarily used to simplify Boolean
connectives during preprocessing. For example, the \proofRule{implies}
rule removes an implication: From $\varphi_1 \Rightarrow \varphi_2$
it deduces $\neg \varphi_1 \lor \varphi_2$.
\paragraph{Resolution.}
CDCL($T$)-based SMT solvers, and especially their SAT solvers,
are fundamentally based on clauses. Hence, Alethe also uses
clauses. Nevertheless, since SMT solvers do not enforce a strict
clausal normal-form, ordinary disjunction is also used.
Keeping clauses and disjunctions distinct simplifies rule checking.
For example, in the case of
resolution there is a clear distinction between unit clauses where
the sole literal starts with a disjunction, and non-unit clauses. The
syntax for clauses uses the \smtinline{cl} operator, while disjunctions
use the standard SMT-LIB \smtinline{or} operator. The \proofRule{or}
\emph{rule} is responsible for converting disjunctions into clauses.
The Alethe proofs use a generalized propositional
resolution rule with the rule name \proofRule{resolution} or
\proofRule{th\_resolution}. Both names denote the same rule. The
difference only serves to distinguish if the rule was introduced by
the SAT solver or by a theory solver. The resolution step is purely
propositional; there is no notion of a unifier.
The premises of a resolution step are clauses and the conclusion
is a clause that has been derived from the premises by some binary
resolution steps.
\begin{comment}{Hans-Jörg Schurr}
We have to clarify that resolution counts the number of leading negations
to determine polarity. This is important for double negation elimination.
Furthermore, as Pascal noted, we should also fold the two rules into one
and use an attribute to distinguish the two cases.
\end{comment}
\paragraph{Quantifier Instantiation.}
To express quantifier instantiation, the rule \proofRule{forall\_inst}
is used. It produces a formula of the form $(\neg \forall x_1 \dots
x_n\,.\,\varphi)\lor \varphi[t_1/x_1]\dots[t_n/x_n]$, where $\varphi$ is
a term containing the free variables $x_1$, \dots $x_n$, and $t_i$
is a new variable-free term with the same sort as $x_i$ for each $i$.
The arguments of a \proofRule{forall\_inst} step are the list \((x_1 , t_1),
\dots, (x_n, t_n)\). While this information can be recovered from the term,
providing it explicitly helps reconstruction because the implicit reordering of
equalities (see below) obscures which terms have been used as instances.
Existential quantifiers are handled by Skolemization.
\paragraph{Linear Arithmetic.}
Proofs for linear arithmetic use a number of straightforward rules,
such as \proofRule{la\_totality}: $t_1 \leq t_2 \lor t_2 \leq t_1$
and the main rule \proofRule{la\_generic}. The conclusion of an
\proofRule{la\_generic} step is a tautology $\neg \varphi_1, \neg
\varphi_2, \dots, \neg \varphi_n$ where the $\varphi_i$ are linear
(in)equalities. Checking the validity of these formulas amounts to
checking the unsatisfiability of the system of linear (in)equalities
$\varphi_1, \varphi_2, \dots, \varphi_n$. The annotation of an
\proofRule{la\_generic} step contains a coefficient for each (in)equality.
The result of forming the linear combination of the literals with
the coefficients is a trivial inequality between constants.
\begin{example}
The following example is a proof of the unsatisfiability
of $(x+y<1) \lor (3<x)$, $x\simeq 2$, and $0\simeq y$.
\begin{plListEq}
\proofsep& 1.& (3 < x) \lor (x + y < 1) &(\proofRule{assume})\\
\proofsep& 2.& x\simeq 2 &(\proofRule{assume})\\
\proofsep& 3.& 0\simeq y &(\proofRule{assume})\\
\proofsep& 4.& (3 < x), (x + y < 1) &(\proofRule{or};1)\\
\proofsep& 5.& \neg (3<x), \neg (x\simeq 2) &(\proofRule{la\_generic};\: ;1.0, 1.0)\\
\proofsep& 6.& \neg (3<x) &(\proofRule{resolution}; 2, 5)\\
\proofsep& 7.& x + y < 1 &(\proofRule{resolution}; 4, 6)\\
\proofsep& 8.& \neg (x + y < 1), \neg (x\simeq 2), \neg (0 \simeq y)
&(\proofRule{la\_generic};\: ;1.0, -1.0, 1.0)\\
\proofsep& 9.& \bot &(\proofRule{resolution};\:\; 8, 7, 2, 3)
\end{plListEq}
\end{example}
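The check performed for step 5 above can be sketched as a linear combination over exact rationals. The constraint encoding and the sign convention are assumptions of this sketch: each constraint is normalized to the form $e > 0$ or $e = 0$, and although the step lists the coefficients 1.0 and 1.0, the equality is applied here in the reversed orientation via a negative multiplier:

```python
from fractions import Fraction

# Hypothetical encoding: a constraint is (coeffs, rel) with rel in {">", "="},
# meaning sum(coeffs[v] * v) rel 0; the key 1 holds the constant summand.

def combination_is_false(constraints, multipliers):
    """Multiply and add the constraints; return True if the result is a
    trivially false (in)equality: all variables cancel and the remaining
    constant contradicts the relation."""
    total, strict = {}, False
    for (coeffs, rel), m in zip(constraints, multipliers):
        if rel == ">":
            assert m > 0          # strict inequalities keep their direction
            strict = True
        for k, v in coeffs.items():
            total[k] = total.get(k, Fraction(0)) + m * v
    const = total.pop(1, Fraction(0))
    if any(v != 0 for v in total.values()):
        return False              # a variable survived: nothing trivial
    # the combination asserts const > 0 (or const = 0); check that it is false
    return const <= 0 if strict else const != 0

system = [({"x": Fraction(1), 1: Fraction(-3)}, ">"),   # 3 < x, i.e. x - 3 > 0
          ({"x": Fraction(1), 1: Fraction(-2)}, "=")]   # x ~ 2, i.e. x - 2 = 0
print(combination_is_false(system, [Fraction(1), Fraction(-1)]))  # True
```

The combination $(x - 3) + (-1)(x - 2) > 0$ simplifies to $-1 > 0$, the trivial falsehood the rule requires.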
\paragraph{Skolemization and Other Preprocessing Steps.}
One typical example for a rule with context is the \proofRule{sko\_ex}
rule, which is used to express Skolemization of an existentially
quantified variable. It is applied to a premise $n$ with a context
that maps a variable $x$ to the appropriate Skolem term and produces
a step $m$ ($m > n$) where the variable is existentially quantified.
\begin{plListEq}
\Gamma, x\mapsto (\varepsilon x.\varphi) &\proofsep n.&\varphi \simeq \psi &(\proofRule{\dots})\\
\Gamma &\proofsep m.& (\exists x.\varphi) \simeq \psi &(\proofRule{sko\_ex};\: n)
\end{plListEq}
\begin{example}
To illustrate how such a rule is applied, consider the following example
taken from~\cite{barbosa-2019}. Here the term $\neg p(\varepsilon x.\neg
p(x))$ is Skolemized. The \proofRule{refl} rule expresses
a simple tautology on the equality (reflexivity in this case), \proofRule{cong}
is functional congruence, and \proofRule{sko\_forall} works like
\proofRule{sko\_ex}, except that the choice term is $\varepsilon x.\neg\varphi$.
\begin{plListEqAlgn}
x\mapsto (\varepsilon x.\neg p(x)) \proofsep& 1.& x &\simeq \varepsilon x.\neg p(x) &(\proofRule{refl})\\
x\mapsto (\varepsilon x.\neg p(x)) \proofsep& 2.& p(x) &\simeq p(\varepsilon x.\neg p(x)) &(\proofRule{cong};\: 1)\\
\proofsep& 3.&( \forall x.p(x))&\simeq p(\varepsilon x.\neg p(x)) &(\proofRule{sko\_forall};\: 2)\\
\proofsep& 4.&(\neg\forall x.p(x))&\simeq \neg p(\varepsilon x.\neg p(x)) &(\proofRule{cong};\: 3)
\end{plListEqAlgn}
\end{example}
\subsection{Classifications of the Rules}
\label{sec:rules-overview}

Section~\ref{sec:rules} provides a list of all proof rules supported by