Chair of Computer Architecture

University of Freiburg

Lecture 02

img/Untitled.png

img/Untitled%201.png

In the 1950s, Martin Davis and Hilary Putnam attempted to solve Hilbert’s 10th problem, which asks for an algorithm to decide whether a non-linear integer equation (a Diophantine equation) has a solution. Hilbert posed this problem around 120 years ago; it was later shown to be undecidable, meaning no algorithm can solve it completely. Davis and Putnam did not receive funding for this problem but were funded by the NSA to work on reasoning with logic. They developed two closely related algorithms, the DP and DLL procedures, published in 1960 and 1962 in separate papers. After the authors of the two papers (Davis, Putnam, Logemann, and Loveland), the combined approach is now known as the DPLL procedure, a cornerstone of logic and computer science.

The DPLL procedure was state of the art in the 1960s, and solving SAT was considered impractical. Since the mid-1990s, however, significant breakthroughs in SAT algorithms and heuristics have led to practical SAT solvers. Today, the DPLL procedure is one of the fundamental algorithms underlying SAT solvers, which are widely used in industry and academia.

img/Untitled%202.png

$C \cup D$ is the resolvent of the first two clauses, which contain the pivot literals $x$ and $\bar{x}$. At the right is the side condition: the clauses $C$ and $D$ must not contain the pivot variable. Above the line are the antecedents; below the line is the consequent.
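A minimal sketch of the rule in Python, assuming clauses are represented as frozensets of signed integers (positive for a variable, negative for its negation); the helper name `resolve` is illustrative, not from the lecture:

```python
def resolve(c, d, x):
    """Resolve clause c (containing literal x) with clause d (containing -x).

    Returns the resolvent (c \\ {x}) ∪ (d \\ {-x}), or None if the rule does
    not apply or the resolvent is a tautology (side condition violated).
    """
    if x not in c or -x not in d:
        return None  # rule not applicable on this pivot
    resolvent = (c - {x}) | (d - {-x})
    # If the remainders reintroduce a complementary pair, the resolvent is a
    # tautology and is discarded.
    if any(-lit in resolvent for lit in resolvent):
        return None
    return resolvent

# Example: (a ∨ b) and (¬a ∨ c) resolve on a to (b ∨ c).
print(sorted(resolve(frozenset({1, 2}), frozenset({-1, 3}), 1)))  # [2, 3]
```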

img/Untitled%203.png

Correctness (soundness) is important in resolution because it guarantees that every derived clause is logically implied by the original clauses: the inference never produces wrong conclusions.

img/Untitled%204.png

Completeness is important in resolution because it guarantees that if the original set of clauses is inconsistent, the empty clause can be derived. This is what makes resolution usable as a refutation method: an incomplete method could never prove unsat.

Note that you are not guaranteed to have the same model before and after performing resolution.

img/Untitled%205.png

In 1, we start with the formula in CNF. If the formula contains a contradiction, i.e., the empty clause, return unsat.

If one OR’d clause is empty, it cannot be satisfied, so the whole formula is false. If the whole formula (a conjunction) is empty, there is nothing left to satisfy, so the formula is true.

Points 2. and 3. are an optimization called the pure literal rule. Pure literals are literals whose variable occurs only positively or only negatively in the formula.

The pure literal rule is not equivalence-preserving: setting a pure literal to true rules out models. But it is satisfiability-preserving, so dropping the clauses in which a pure literal occurs is a valid rule; it never loses all solutions of a satisfiable formula.
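A small sketch of the pure literal rule, assuming the CNF is a list of clauses given as sets of signed integers; the function name is hypothetical:

```python
def eliminate_pure_literals(cnf):
    """Repeatedly drop all clauses containing a pure literal.

    Satisfiability-preserving but not equivalence-preserving: models that
    assign a pure literal to false are ruled out.
    """
    cnf = [set(c) for c in cnf]
    changed = True
    while changed:
        changed = False
        literals = {lit for clause in cnf for lit in clause}
        pure = {lit for lit in literals if -lit not in literals}
        if pure:
            remaining = [c for c in cnf if not (c & pure)]
            changed = len(remaining) < len(cnf)
            cnf = remaining
    return cnf

# (a ∨ b) ∧ (¬b ∨ c): a and c are pure; afterwards b and ¬b become pure too.
print(eliminate_pure_literals([{1, 2}, {-2, 3}]))  # []
```

Note that removing clauses can make further literals pure, hence the loop.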

In 5, we pick a variable that occurs both positively and negatively (otherwise the pure literal rule would apply). We then do resolution on it, add all resolvents (step 6), and in step 7 we are allowed to remove all original clauses containing the variable.

Ultimately, we end up with either the empty clause (unsat) or the empty formula (sat).
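The elimination loop above can be sketched in Python; this is a minimal illustration, assuming clauses as frozensets of signed integers and an arbitrary variable-selection order, not the historical implementation:

```python
def dp(cnf):
    """Davis–Putnam variable elimination.

    cnf: iterable of clauses, each an iterable of signed integer literals.
    Returns True iff satisfiable. Exponential in the worst case.
    """
    # Drop tautological clauses (containing both x and -x) up front.
    clauses = set()
    for c in cnf:
        c = frozenset(c)
        if not any(-lit in c for lit in c):
            clauses.add(c)
    while True:
        if frozenset() in clauses:
            return False          # step 1: empty clause => unsat
        if not clauses:
            return True           # empty formula => sat
        # Step 5: pick any remaining variable.
        x = abs(next(iter(next(iter(clauses)))))
        pos = [c for c in clauses if x in c]
        neg = [c for c in clauses if -x in c]
        rest = {c for c in clauses if x not in c and -x not in c}
        # Step 6: add all resolvents on x (skipping tautologies) ...
        resolvents = set()
        for c in pos:
            for d in neg:
                r = (c - {x}) | (d - {-x})
                if not any(-lit in r for lit in r):
                    resolvents.add(r)
        # Step 7: ... and remove all original clauses mentioning x.
        clauses = rest | resolvents

print(dp([{1, 2}, {-1}]))  # True
print(dp([{1}, {-1}]))     # False
```

Each iteration eliminates one variable completely, so the loop terminates; the clause set, however, can grow quadratically per step.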

img/Untitled%206.png

The clauses on top contain $x$, those on the bottom contain $\bar{x}$; we form all resolvents between the two groups. $F’$ is the rest: the clauses containing neither $x$ nor $\bar{x}$.

img/Untitled%207.png

Step 6 is quadratic in the worst case: with $p$ clauses containing $x$ and $n$ containing $\bar{x}$, we add up to $p \cdot n$ resolvents while removing only $p + n$ clauses. That is worse than doubling, and even mere doubling per elimination step already gives exponential growth. So this is a really bad exponential algorithm, but it was the first SAT algorithm they had.

Today, resolution is used in preprocessing, e.g., in the Satch solver, but only if the number of resolvents is bounded, i.e., you do not add more clauses than you remove.

img/Untitled%208.png

This is a pseudocode version of the procedure above. There is no pure literal rule here because resolution simulates it: a variable occurring in only one polarity produces no resolvents, so eliminating it simply drops its clauses.

img/Untitled%209.png

$a \oplus b = (a \vee b) \wedge (\lnot a \vee \lnot b)$

\[\begin{aligned} \lnot (a \oplus b) &= \lnot\bigl((a \vee b) \wedge (\lnot a \vee \lnot b)\bigr) \\ &= \lnot(a \vee b) \vee \lnot (\lnot a \vee \lnot b) \\ &= (\lnot a \wedge \lnot b) \vee (a \wedge b) \\ &= (\lnot a \vee b) \wedge (a \vee \lnot b)\end{aligned}\]
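Both CNF encodings can be checked mechanically against a truth table; a quick Python sketch:

```python
from itertools import product

def xor(a, b):
    return a != b

def cnf_xor(a, b):   # (a ∨ b) ∧ (¬a ∨ ¬b)
    return (a or b) and (not a or not b)

def cnf_xnor(a, b):  # (¬a ∨ b) ∧ (a ∨ ¬b)
    return (not a or b) and (a or not b)

# Enumerate all four assignments and compare against the definitions.
for a, b in product([False, True], repeat=2):
    assert cnf_xor(a, b) == xor(a, b)
    assert cnf_xnor(a, b) == (not xor(a, b))
print("both CNF encodings verified")  # prints "both CNF encodings verified"
```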

img/Untitled%2010.png

Notice that we do not get a counterexample here! We explain later how to obtain one.

Check Your Knowledge