Chair of Computer Architecture

University of Freiburg

Lecture 06

The following BDD-based encoding goes back to Shannon.

img/Untitled.png

img/Untitled%201.png

At the top, we have a cardinality constraint involving nine literals. These literals can take the values 0 or 1. The constraint states that at least two and at most three of these literals should be true.
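Written out as a formula (with $\ell_1, \dots, \ell_9$ denoting the nine literals and the bounds taken from the constraint above):

$$2 \;\le\; \ell_1 + \ell_2 + \cdots + \ell_9 \;\le\; 3$$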

The first step in this process is to build a Binary Decision Diagram (BDD). The BDD serves as a visual representation of the constraint and helps analyze and manipulate the logical relationships between the literals.

img/Untitled%202.png

Think of this as a circuit. The output is either 0 or 1.

img/Untitled%203.png

Complexity: you do at most $n = 9$ steps here. The Tseitin encoding introduces one variable per BDD node, so with $n$ input variables and bound $k$ you get on the order of $n \cdot k$ variables; since we can always normalize to $k \le n/2$, this is all in all rather $n^2$. You can do this for small $k$, but this encoding will blow up for larger ones.

img/Untitled%204.png

If-then-else: $(z \wedge x) \vee (\lnot z \wedge y)$. The Tseitin encoding yields four clauses, but you can also get away with three because of monotonicity.
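For reference, the standard Tseitin clauses for a fresh variable $t$ with $t \leftrightarrow \mathit{ite}(z, x, y)$ are the following four (a sketch; in the monotone case, where $t$ occurs in only one polarity, the clauses of the unneeded direction can be dropped):

$$(\bar{t} \vee \bar{z} \vee x) \wedge (\bar{t} \vee z \vee y) \wedge (t \vee \bar{z} \vee \bar{x}) \wedge (t \vee z \vee \bar{y})$$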

Backjumping

img/Untitled%205.png

Richard Stallman introduced the concept of backjumping, primarily in the context of constraint programming rather than SAT. Imagine it as a variation of the DPLL algorithm. Initially, we decide to split on variable $x$. If $x$ is set to true, we explore the left branch and make another decision on variable $y$, assigning it to true.

Yellow: here, several operations occur, but we do not find a satisfying solution. Consequently, we backtrack. We realize that this configuration is infeasible, so we must take the second branch, the right branch. Taking it is often referred to as a “discrepancy”: we had a heuristic expectation that the left branch would yield a satisfying solution, but it did not, so we deviate from the heuristic.

Discrepancy search involves accepting the existence of such discrepancies and attempting to minimize their occurrence. We continue traversing the left branches, hoping for a satisfying solution, but ultimately find that no satisfying solution exists.

Choosing $y$ did not lead to a successful outcome. We revert all assignments made on the trail since that decision and set $y$ to false. When encountering a conflict, we examine the reasons behind it. If we know that variable $x$ was not utilized in the yellow part, we could replay the yellow part on the right branch. There are two potential approaches: caching this information (which is not straightforward), or using the blue arrow as an alternative to traditional backtracking. Rather than backtracking to the pink node, we backtrack to the top, remove the assignment of $x$, and proceed with $\bar{x}$.

img/Untitled%206.png

Here: pick $\bar{y}$ instead of $x$

img/Untitled%207.png

All searches at the bottom are useless, so we go directly up. This is a quadratic reduction; the $x$ decisions were completely useless. We figured out what the real problem was, $y$, so we make that decision at the top. Decisions whose variables were not used in the conflict are jumped over. This was called non-chronological backtracking, because you do not follow the chronological order of decisions when you backtrack.

img/Untitled%208.png

Performing unit resolution with $-3$ removes the first two clauses, and the third clause shrinks to (-1 -2). Together, these clauses state that variable $3$ is the logical AND of variables $1$ and $2$. However, in the subsequent analysis, it becomes apparent that the set of clauses is unsatisfiable: each clause eliminates at least one possible assignment, ultimately ruling out all assignments. The core of the problem lies in the bottom four clauses; the clauses on top are not necessary for understanding the issue. Whether variable $3$ is set to true or false is entirely irrelevant, because the AND gate can always be satisfied. The crucial part is the bottom of the problem.

img/Untitled%209.png

img/Untitled%2010.png

You also get a conflict here. Picking variables to be false was, for a long time, the default heuristic.

img/Untitled%2011.png

Repeat on the right-hand side. Wasteful because we did not use -3 here.

img/Untitled%2012.png

We could mark this decision as being used. Backtracking: is -3 used? No, because we did not mark it. So we do not go to decision level 1 but to decision level 0.

img/Untitled%2013.png

The tree looks weird now. It is not a binary tree anymore.

img/Untitled%2014.png

After you pick bad decisions which are not useful, you can recover from them. Biere’s guess: not an exponential improvement, but a quadratic (polynomial) improvement in terms of search space.

img/Untitled%2015.png

img/Untitled%2016.png

The implication graph is a hypergraph whose edges are sets of nodes. The nodes represent literals, i.e., variables with assigned values. In this case, the literals $a$ and $b$ were both assigned true, which implies that the literal $\bar{c}$ must also be true (that is, $c$ must be false), because the other literals of the clause $(\bar{a} \vee \bar{b} \vee \bar{c})$ are false.

img/Untitled%2017.png

Another approach to visualizing this hypergraph is to assume the existence of an implicit hyperedge. To do this, take the source nodes of all incoming edges and negate them, and take the conclusion, here the single literal $\bar{c}$, as it is. This uniquely defines a clause.

$a \wedge b \Rightarrow \bar{c}$ is equivalent to $(\bar{a} \vee \bar{b} \vee \bar{c})$

In this representation, it is always assumed that a hyperedge generated this specific structure. To identify the responsible hyperedge, you need to recognize the clause that corresponds to it: by negating all the incoming (source) nodes and keeping the conclusion $\bar{c}$ as a single literal, you derive a clause that maps uniquely to this graphical notation. Thus, there is a one-to-one mapping between clauses and this graphical representation.
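In general, a hyperedge from source nodes $l_1, \dots, l_k$ to a conclusion $l$ corresponds to exactly one clause:

$$l_1 \wedge \cdots \wedge l_k \Rightarrow l \quad\equiv\quad (\bar{l}_1 \vee \cdots \vee \bar{l}_k \vee l)$$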

img/Untitled%2018.png

The implementation process is much simpler than the graphical notation. We iterate over each assigned literal, which is stored on the trail, and store a pointer to the clause that triggered the assignment.

For instance, if $c$ is assigned as false, we store a pointer to the clause $(\bar{a} \vee \bar{b} \vee \bar{c})$. This implicitly defines the hyperedge or, in the case of an ordinary graph, these two edges. Additionally, decisions will have a zero pointer (similar to a NULL reference in Java programming).

This graph structure is straightforward to implement. When making the assignment $\bar{c}$, we immediately know which clause was responsible for enforcing this assignment.
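A minimal sketch in C of this structure (the names are illustrative, not PicoSAT’s actual data types): each trail entry carries the assigned literal, its decision level, and a pointer to its reason clause, which is NULL for decisions.

```c
#include <stddef.h>

/* A clause is a flexible array of literals in DIMACS style (e.g. -3). */
typedef struct Clause { int size; int lits[]; } Clause;

/* One entry on the trail: the assigned literal together with the clause
   that forced the assignment.  Decisions carry a NULL (zero) reason. */
typedef struct Assignment {
  int           lit;    /* assigned literal, e.g. -3 = "variable 3 is false" */
  int           level;  /* decision level of this assignment                 */
  const Clause *reason; /* forcing clause; NULL marks a decision             */
} Assignment;
```

Conflict analysis then walks the trail backward and follows these reason pointers, which is exactly the backward graph traversal described above.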

Interestingly, this graph traversal primarily occurs backward (or in reverse chronological order of assignments), which is sufficient for the SAT solver’s operation. It is a fortunate coincidence that this simplified backward traversal or upward graphical traversal is all that is needed.

The roots of these graphs, the nodes without incoming edges, represent the decision points in the SAT solver.

img/Untitled%2019.png

Another important aspect is that this approach provides a straightforward method for incorporating a learning mechanism. We can utilize it as a form of cache to optimize future search operations. Additionally, it enables the implementation of backjumping techniques.

img/Untitled%2020.png

img/Untitled%2021.png

This CNF (Conjunctive Normal Form) formula was generated by applying fuzzing techniques to the solver PicoSAT. Fuzzing is an important approach for exploring corner cases in order to identify bugs. In the context of logic: if a significant portion of the search space is unsatisfiable and there is an implementation error, claiming unsatisfiability without proper justification might work most of the time, because a substantial portion of the search space is already unsatisfiable. However, there is a possibility of a bug that overlooks a specific part of the search, resulting in a formula that should be satisfiable being deemed unsatisfiable due to the solver’s error. Fuzzing is performed to detect such bugs, and it is akin to searching for security vulnerabilities in software.

The formula generated through fuzzing was designed to encompass all the significant aspects of learning, which will be discussed in the following sections.

In this formula, there is a cut dividing it into two parts, through variables 3 and 4. Above this cut, we have variables 1 and 2; below it, we have variables 5 and 6. Apart from this cut, there is no connection between the two parts. The heuristic employed here is a min-cut strategy. Splitting the formula on 3 and 4 is deemed beneficial because it divides the formula into two parts. If this division results in unsatisfiability, then one part alone would be sufficient to determine unsatisfiability. On the other hand, if both parts remain satisfiable, it implies that splitting on 1 and 2 alone would lead to unsatisfiability above the cut, while splitting on 5 and 6 alone would lead to unsatisfiability below the cut.

img/Untitled%2022.png

This image shows an implication graph, with decision levels marked by dotted lines in ascending order. Initially, there are no units in the original formula, so we must make a decision; we choose 3. After assigning 3, we look at the container of references to clauses containing -3; the first such clause is -3 4 0, with 4 as its conclusion. In the graph, this clause is the only incoming edge to 4, meaning a binary clause is responsible. The first clause containing -4 is -4 -1 0, meaning 4 implies -1; analogously, 4 implies -2. These nodes also have only one incoming edge each, so binary clauses are responsible there as well.
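A sketch of this lookup in C, assuming a simple occurrence-list representation (real solvers use watched literals instead; all names here are illustrative): after a literal becomes true, we only visit clauses containing its negation and check whether they became unit or empty.

```c
typedef struct Solver Solver;
typedef struct Clause { int size; int *lits; } Clause;
typedef struct ClauseList { int count; Clause **clause; } ClauseList;

/* Assumed helpers: value() returns >0 if a literal is true, <0 if false,
   0 if unassigned; assign() puts a literal on the trail with its reason;
   conflict() records that the clause became empty. */
ClauseList *occurrences (Solver *, int lit);
int         value       (Solver *, int lit);
void        assign      (Solver *, int lit, Clause *reason);
void        conflict    (Solver *, Clause *);

/* After literal "lit" becomes true, only clauses containing -lit can
   shrink; check each one for becoming unit or empty. */
void propagate_literal (Solver *solver, int lit) {
  ClauseList *occs = occurrences (solver, -lit);
  for (int i = 0; i < occs->count; i++) {
    Clause *c = occs->clause[i];
    int unassigned = 0, unit = 0, satisfied = 0;
    for (int j = 0; j < c->size; j++) {
      int v = value (solver, c->lits[j]);
      if (v > 0) { satisfied = 1; break; }       /* clause already true */
      if (!v) { unassigned++; unit = c->lits[j]; }
    }
    if (satisfied) continue;
    if (!unassigned) { conflict (solver, c); return; } /* empty clause  */
    if (unassigned == 1) assign (solver, unit, c);     /* unit: c is reason */
  }
}
```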

img/Untitled%2023.png

In the original paper, they wrote it like this. They used $\kappa$, not $c$ for conflict.

img/Untitled%2022.png

There’s a $\kappa$ here; you cut through these edges, negate all the incoming (source) nodes, and that is your clause. The first clause, -3 1 2 0, is empty now in the SAT solver in this situation. What would happen next? The implication graph tells you how you arrived at this conflict: if you start at this $\kappa$ and walk back through the implication graph, you collect all the reasons, ultimately the decisions, that led to this conflict. The reason for getting into this conflict here is that 3 was picked as a decision.

img/Untitled%2024.png

The clauses -3 4 and 3 -4 together imply that 3 = 4 (the two variables are equivalent).

$F$ is a formula consisting of clauses. Formula $F$, together with decision 3, semantically implies the empty clause; this is what the implication graph tells us. Doing a bit of logic, this is the same as saying that $F$ implies -3. The crucial part: if the formula implies something, you can syntactically strengthen the formula with it. Whenever $F \models G$, then $F$ is equivalent to $F \wedge G$.
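Spelled out for this example, in the literal notation of these notes:

$$F \wedge 3 \models \bot \quad\iff\quad F \models -3 \quad\implies\quad F \equiv F \wedge (-3)$$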

img/Untitled%2025.png

That means we can take this part (blue) and append it to the end of the formula as a single clause, here a unit clause. This addition is called learning.

img/Untitled%2022.png

Assume we have added -3. Let us look back at the implication graph, with -3 added here. Looking at the top: -3 actually means we should not have made this decision, because a unit has to be propagated first, and previously there was no unit on decision level 0. That is why the solver backtracks to decision level 0 and uses the unit we just learned (blue). The node has an incoming edge, which means it is not a decision but a unit; everything rooted at decision level 0 is actually a unit. Then binary clauses give us -4, -5, and -6, and we reach the conflict on 4, 5, and 6, which is $\kappa$ again.

img/Untitled%2026.png

After adding -3, we again have $F \equiv F \wedge -3$. Now we immediately get a conflict, $\bot$. With the same argument as before ($F \models G$ implies $F \equiv F \wedge G$), we now know that we can add the empty clause to $F$.

img/Untitled%2027.png

The formula now becomes $F \wedge -3 \wedge \perp$

So we made one decision, learned a unit clause, propagated again, and got UNSAT. Later we will see that this is simulated by resolution, but that is a different story.

This would be the good way: splitting on the cut variable here is good. Now let us split on something bad instead, so we pick 1.

img/Untitled%2028.png

It is completely useless. The decision is -1. The clause on top is the only one with a positive occurrence of 1, and we only need to check clauses in which 1 occurs positively. Because it is a ternary clause, nothing happens; only if it were a binary clause would we now have a unit.

Remember the trail and all the things we discussed before. We increase the decision level and make the assignment -1 (this could be the decision the SAT solver makes anyway: pick the smallest variable and set it to false). Nothing happens.

Now we face another decision, and we make the same one as before. Why not just replay the whole thing? Because with -1 and 3 assigned, the first clause that triggers is -3 1 2 0: since its first two literals are false, we must assign 2 to true. This is also the first time we encounter a real hyperedge. When you reach a node in the implication graph, you collect the incoming edges, go to the source nodes of those edges, and flip them. In this case, we get 1 -3 2, which is exactly the first clause.

So this blue cut, which separates a literal from its source nodes, corresponds to the very first clause of the formula. Then 4 is implied, and then you get a conflict.

Let us do the same reasoning as before. In the graphical notation, we walk the implication graph backward to the roots. Both roots, i.e., both decisions, are connected to the conflict; therefore, if both of them are made, we get the empty clause.

img/Untitled%2029.png

$F \wedge (-1 \wedge 3) \models \bot$. With the same reasoning, this is equivalent to saying $F \models \lnot (-1 \wedge 3)$, and that is identical to $F \models (1 \vee -3)$ (De Morgan).

img/Untitled%2030.png

This is a clause. Remarkable! Why? If we now continue, the same argument as before:

img/Untitled%2031.png

Just add it to the list of clauses. It fits exactly into the data structure we already have, lists of clauses. This is a reason why CNF is so natural. The assignments you should not visit are called “no-goods” (here, -1 and 3 form a no-good: they cannot both be true at the same time), and these no-goods are clauses. That is why, if you want to collect and learn these no-goods, it is good to start with a CNF from the very beginning: if you add a clause to a CNF, it is again a CNF.

That is why SAT solvers are so efficient and used in other places. Other applications: even a search guiding problems harder than NP gets into the same situation and adds such “lemmas” and “no-goods”.

img/Untitled%2032.png

img/Untitled%2033.png

Another example is a complicated search on top of a computer algebra problem that involves logic and decisions: you collect what is not possible (ideals whose variety is empty) and then add the responsible parts as clauses. So on top, you should have a CNF representation. With SAT it sounds technical, but using CNF is a fundamental insight. It is this yellow part here: when you make decisions and choices, the no-good is a clause.

img/Untitled%2028.png

We will add the clause 1 -3. When backtracking in this example, we backtrack chronologically, as in DPLL, with one exception: we are already implementing the tail-recursion optimization discussed earlier. Therefore, we backtrack to decision level 1, because level $l = 1$ was responsible for this conflict. In general: backtrack to the decision level of the literal in the learned clause with the second-highest decision level.

We just showed that the assignment on $l = 2$ is not possible, and $l = 1$ is the second-highest level; it was necessary to derive this conflict. Everything below is inconsistent: drop it and backtrack to decision level 1. So we backtracked, with the newly learned clause 1 -3. This clause now becomes propagating.
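A minimal C sketch of this rule (illustrative names, not a real solver’s API): the backjump level is the highest decision level among the learned clause’s literals, excluding the conflict level itself. For a learned unit clause there is no such level, so the result is 0, which matches the backjump to decision level 0 seen later.

```c
typedef struct Solver Solver;
typedef struct Clause { int size; int *lits; } Clause;

int level_of (Solver *, int lit); /* assumed: decision level of a literal */

/* Backjump level = second-highest decision level in the learned clause,
   i.e. the highest level strictly below the conflict level. */
int backjump_level (Solver *solver, const Clause *learned, int conflict_level) {
  int jump = 0;                       /* unit clause: stays 0 (root level) */
  for (int i = 0; i < learned->size; i++) {
    int level = level_of (solver, learned->lits[i]);
    if (level != conflict_level && level > jump) jump = level;
  }
  return jump;
}
```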

img/Untitled%2034.png

now: $F \wedge (1 \vee -3)$

The argument again: if we had had the blue clause before, we would not have made the decision from two slides ago. We would have taken decision -1 and immediately used the learned clause to derive -3. That flips the decision we made, and it partially simulates DPLL.

So, on the slide before, we made decision 3; then we learned 1 -3 and backtracked one level, and now -3 says that 3 has to be false, because we just showed that it cannot be true. The same as before: while propagating, we find a clause in the lower part of the formula whose literals are all false. The only reason for that is decision -1, so we learn the unit clause 1 and add it to the formula.

img/Untitled%2035.png

now: $F \wedge (1 \vee -3) \wedge 1$

Then on decision level 0 we have the unit. The arrow to the left of it is redundant, because any root is just a unit, and this only happens at decision level 0; but in principle we remember that there is a unit clause responsible for that one.

Then we get implications and a $\kappa$, the empty clause again. The decision tree now looks like the one on the left, not really a tree anymore in a certain sense. We again reach the situation where we have learned the empty clause and can add it to the formula, so the whole formula is unsatisfiable.

So this is the idea of implication graphs: you get into these conflicts and try to find the reasons, then you add these reasons as clauses; this changes the decision tree in such a way that it already knows the learned clauses, and in the end, if the formula is unsatisfiable, you get the empty clause, which is added to the formula.

img/Untitled%2036.png

In this run, we will actually do some backjumping. In the first run, we picked a cut variable; in the second run, we picked a variable on top as a second decision; now we pick a variable at the bottom: -6. Doing unit resolution with -6, nothing happens.

Then we make a good decision, the one on the cut. With exactly the same reasoning as before, we learn the unit clause -3. But what should we pick as the backtrack level? We go through the learned clause, just -3 in this case, and pick the second-highest decision level.

What is the second-highest decision level of a unit clause? It is 0. That is why we jump back to decision level 0: after learning this unit clause, we implicitly realize that decision level 1 was completely useless, and we can backjump to decision level 0. Mechanically, it really works like that: do the analysis, learn the clause, then go through the clause and determine the second-highest decision level (which is of course the highest decision level except the one where the conflict was found), and that is your backjump level.

img/Untitled%2037.png

We see that 6 does not have a second branch here; the backjumping happened instead. This run is almost as good as the first one, except that we made decision -6 and checked whether we could propagate (we could not).

img/Untitled%2038.png

Learning makes the algorithm non-recursive. If you want to apply DPLL to very large formulas (e.g., millions of variables), you need to de-recursify the algorithm. It is not that complicated, but you cannot implement an algorithm that recurses a million levels deep; the limit is typically around 10,000. You would just get a stack overflow when the stack hits the heap. Therefore, it is good to have a non-recursive version.

BCP = boolean constraint propagation

First of all, it is non-recursive. There are three functions that were not explained: bcp_queue_is_empty, deduce, and backtrack. bcp_queue_is_empty relates to propagation, the hot spot of any SAT solver; imagine it as walking the sparse matrix, trying to find unit and empty clauses. This is a confluent procedure. If the BCP queue is empty and all variables are decided, then return SAT.

Otherwise, you derive a conflict clause (= learned clause, no-good, lemma). You take this clause and determine the backjump level, the second-highest level in the learned clause; if you cannot backtrack because you are already on decision level 0, then you know the formula is UNSAT, because you just learned the empty clause.

So this is a loop, no recursive calls to sat.

This is the basic version of CDCL (no restarts, no clause-database reduction yet).
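The shape of this loop as a C sketch (following the structure described above; all helper names are illustrative, not the slide’s exact API):

```c
typedef struct Solver Solver;
typedef struct Clause Clause;

/* Assumed helpers (illustrative): */
int     bcp (Solver *);                      /* propagate; returns 0 on conflict */
int     all_assigned (Solver *);             /* no unassigned variables left?    */
int     current_level (Solver *);
Clause *analyze (Solver *);                  /* a.k.a. deduce: learn a no-good   */
int     backjump_level (Solver *, Clause *); /* second-highest level in clause   */
void    backtrack (Solver *, int level);
void    decide (Solver *);                   /* pick and assign a new variable   */

/* Non-recursive CDCL main loop; returns 10 for SAT and 20 for UNSAT
   (the exit codes mentioned below). */
int sat (Solver *solver) {
  for (;;) {
    if (!bcp (solver)) {                     /* propagation found a conflict */
      if (!current_level (solver))
        return 20;                           /* conflict at level 0: UNSAT   */
      Clause *learned = analyze (solver);    /* learn clause from conflict   */
      backtrack (solver, backjump_level (solver, learned));
    } else if (all_assigned (solver)) {
      return 10;                             /* fully propagated, all assigned: SAT */
    } else {
      decide (solver);                       /* open a new decision level    */
    }
  }
}
```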

img/Untitled%2039.png

20 = unsat, 10 = sat

Satch also has this Boolean constraint propagation. If analyzing the conflict (this was called deduce before) is not possible because you have already reached decision level 0, then you set res to 20 (the code for UNSAT, which is also the exit code of these programs).

img/Untitled%2040.png

Then here is a trick which is hidden in the slides: if no variables are unassigned and you have propagated them all (otherwise you would not have reached this part), then the formula must be SAT. Of course, this needs strong invariants of the solver, guaranteeing that you are not missing any propagation. You do not need to check whether all clauses are satisfied; it is enough that you have assigned all variables and propagated all of them without a conflict.

img/Untitled%2041.png

These are optimizations, not really needed. If none of them is triggered, then decide is called: it picks a new variable and assigns it to true or false.

img/Untitled%2042.png

Here, propagate is our propagation function. If propagate fails, we try to analyze. Then we have a global flag that remembers whether we learned an empty clause (the unsat variable). This is C++ now; that is why it is a global variable.

Satisfied: no unassigned variables are left. Then come the optimizations we discussed before. This loop is also the one you would use if a SAT solver drives some other search (e.g., a computer algebra problem): somewhere down in this function, the propagation would actually call a computer algebra program, which would report that this Boolean combination will not work. The same is true for SMT solvers, which also have a top loop that looks exactly like this, and more and more reasoning engines follow this principle ⇒ it is at the heart of automated reasoning.

The optimizations in other domains may be different, but propagate, analyze, checking whether you are done (satisfied), and decide are the same in many applications, with learning (behind analyze) in the background.

img/Untitled%2043.png

Here we zoom into backtrack. This code is from PicoSAT.

We discussed analyze before ⇒ it traverses the implication graph and produces the learned clause, which we then add to the solver. Then we get the assignment from this learned clause (the literal that gets flipped) and tell the propagator to propagate it (the assignment is put on the trail). Then we determine the backjump level, the second-highest level in this clause, and backtrack.
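The same sequence as a C sketch (illustrative helper names, not PicoSAT’s literal code; in this sketch the backjump happens before the flipped assignment is put on the trail):

```c
typedef struct Solver Solver;
typedef struct Clause Clause;

/* Assumed helpers mirroring the description above: */
Clause *analyze (Solver *, Clause *conflict);       /* walk implication graph  */
void    add_clause (Solver *, Clause *);             /* store learned clause    */
int     flipped_literal (Clause *);                  /* literal forced by it    */
int     backjump_level (Solver *, Clause *);         /* second-highest level    */
void    backtrack (Solver *, int level);             /* undo trail above level  */
void    assign (Solver *, int lit, Clause *reason);  /* enqueue on the trail    */

/* Conflict handling: learn, backjump, then let the learned clause
   propagate the flipped assignment (its reason pointer = learned clause). */
void handle_conflict (Solver *solver, Clause *conflict) {
  Clause *learned = analyze (solver, conflict);
  add_clause (solver, learned);
  int lit = flipped_literal (learned);
  backtrack (solver, backjump_level (solver, learned));
  assign (solver, lit, learned);  /* on the trail; BCP continues from here */
}
```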

img/Untitled%2044.png

Early on, people realized that you can see this as resolution.

Check Your Knowledge