Chair of Computer Architecture

University of Freiburg

Lecture 04

img/Untitled.png

Static heuristics

A static decision heuristic is a strategy for selecting the next variable to assign a truth value during the search. Rather than making decisions based on the current state of the formula, a static heuristic precomputes a fixed order in which to consider variables.

One common approach uses a binary decision diagram (BDD), which represents the formula as a directed acyclic graph. In a BDD, the variables are always split in the same order, and the size of the diagram depends heavily on that order. Therefore, spending time upfront to design an effective variable order is important.

Another strategy for selecting variables is to use tree decomposition. If the formula has a tree-like structure, it may be advantageous to split variables in the order of the tree during a depth-first search. This is known as a “linear” or “total order” strategy. However, tree decomposition is only useful in a limited number of cases.

A significant advantage of static decision heuristics is that the order in which variables are considered only needs to be computed once. This allows more complex and time-consuming algorithms to generate the decision order, such as finding cuts or tree decomposition, without slowing the search.

Dynamic heuristics

Dynamic decision heuristics refer to deciding which variable assignments to try next based on the current state of the problem. This approach is called “dynamic” because it considers the current state of the formula instead of a fixed, predetermined set of rules for making decisions.

One commonly used dynamic heuristic is based on the number of occurrences of literals in the remaining clauses that are not yet satisfied. The literals that occur most frequently in these clauses are good candidates, because assigning them makes many clauses true at once. By eagerly satisfying as many clauses as possible with each decision, the formula shrinks quickly. This is particularly useful in the DPLL algorithm: since DPLL has exponential worst-case complexity, reducing the formula quickly helps avoid deep searches.
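As a rough illustration (not code from the lecture), counting how often each free literal occurs in the not-yet-satisfied clauses could be sketched in Python as follows; clauses are DIMACS-style lists of signed integers, and the function name is made up:

```python
from collections import Counter

def literal_occurrences(clauses, assignment):
    """Count how often each free literal occurs in clauses that the
    partial assignment (a dict variable -> bool) does not yet satisfy."""
    counts = Counter()
    for clause in clauses:
        # Skip clauses that are already satisfied.
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue
        for l in clause:
            if abs(l) not in assignment:  # only unassigned literals count
                counts[l] += 1
    return counts
```

The literal with the highest count is then a candidate for the next decision.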

An alternative approach adds counters to the clauses. Each clause is assigned a count of its non-false literals, which is updated during every propagation of a variable assignment. When a count drops to 1, a unit clause has been found, and when it drops to 0, a conflict is detected: the current partial assignment falsifies that clause. However, these counters are costly to maintain, particularly for large clauses, and the bookkeeping becomes more complicated when new clauses are learned during the search.
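A minimal sketch of this counter bookkeeping (the function name and data layout are invented for illustration): assigning a literal true marks clauses containing it as satisfied and decrements the counters of clauses containing its negation.

```python
def assign_and_update(clauses, counts, satisfied, literal):
    """Make `literal` true. `counts[i]` holds the number of non-false
    literals of clause i; `satisfied[i]` marks satisfied clauses.
    Returns ('conflict', i) if clause i runs out of non-false literals,
    otherwise ('units', indices of clauses that just became unit)."""
    units = []
    for i, clause in enumerate(clauses):
        if satisfied[i]:
            continue
        if literal in clause:
            satisfied[i] = True        # clause is now satisfied
        elif -literal in clause:
            counts[i] -= 1             # one more literal became false
            if counts[i] == 0:
                return ('conflict', i)
            if counts[i] == 1:
                units.append(i)
    return ('units', units)
```

A real solver would also record the assignment itself and undo these updates on backtracking, which is exactly the maintenance cost the paragraph above refers to.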

There are “first-order” and “second-order” decision heuristics. First-order heuristics rely only on the current state of the formula, while second-order heuristics consider the history of previous variable assignments. This means that when the algorithm returns to a previously visited state, the heuristic may choose a different variable assignment than it did the first time.

img/Untitled%201.png

The graph above is called a “variable incidence graph.”

Cut width involves representing a CNF as a graph: the clauses of the CNF are the nodes, and edges are drawn between nodes that share a variable. Instead of connecting literals, one can connect variables, merging the two phases (e.g., 3 and -3) into one node, which gives a sparser representation.

The purpose of this graph is to capture all possible resolutions of the CNF. A cut in this graph is a set of variables whose removal splits the graph into two disconnected parts. For example, if we cut on variable 3, we separate the part of the formula over variables 1 and 2 from the rest.
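To make the cut idea concrete, here is a toy sketch (not the lecture's implementation): build the clause graph, ignore edges whose only shared variables are in the cut, and compute the remaining connected components of clauses.

```python
from itertools import combinations

def clause_graph_components(clauses, cut):
    """Nodes are clause indices; two clauses are connected if they share
    a variable NOT in `cut`. Returns the connected components, i.e. the
    parts that the cut separates."""
    n = len(clauses)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        shared = {abs(a) for a in clauses[i]} & {abs(b) for b in clauses[j]}
        if shared - set(cut):          # some shared variable survives the cut
            adj[i].add(j)
            adj[j].add(i)
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()   # depth-first search from `start`
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        components.append(comp)
    return components
```

Cutting on variable 3 in a formula whose clauses only interact through variable 3 yields exactly two components, matching the picture above.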

Visualizing the graph, we can permute the clauses and pull the two sides apart; cutting along the cut variables then separates the graph into two parts.

While cut-width heuristics are not commonly used, they can be useful for parallelization and counting in SAT solving. By identifying these cuts, we can derive a more efficient algorithm for solving the CNF.

Cut Width Algorithm

int sat (CNF cnf)
{
  SetOfVariables cut = generate_good_cut (cnf);
  CNF assignment, left, right;
  left = cut_off_left_part (cut, cnf);
  right = cut_off_right_part (cut, cnf);
  
  forall_assignments (assignment, cut)
  {
    if (sat (apply (assignment, left)) && sat (apply (assignment, right)))
      return 1;
  }
  
  return 0;
}

The key to the efficiency of the algorithm is finding good cuts. A good cut balances the two parts and requires a small number of variables to cut. If the cuts are small, the algorithm can be polynomial. However, the algorithm can become exponential if the cut sizes are large.

The cut width algorithm takes a CNF as input and recursively cuts the formula into two parts using a good cut. The algorithm then generates all possible assignments to the variables in the cut and applies them to the left and right parts of the formula. If both parts are satisfiable, the algorithm returns 1; otherwise, it returns 0.

Limiting the number of variables in a cut is important to ensure a polynomial runtime. If the number of variables in a cut is small, the loop generated by the algorithm will only have a limited number of iterations. For example, if the maximum cut size is 5, the loop will have 32 iterations (2^5), while a maximum cut size of 10 will result in 1024 iterations. Large cut sizes can make the algorithm inefficient.
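The forall_assignments loop over a cut can be sketched with a small generator; for a cut of size k it yields exactly 2^k assignments:

```python
from itertools import product

def all_cut_assignments(cut):
    """Yield every truth assignment to the cut variables, mirroring the
    forall_assignments loop in the pseudocode above: 2^|cut| dicts
    mapping variable -> bool."""
    for values in product([False, True], repeat=len(cut)):
        yield dict(zip(cut, values))
```

With a maximum cut size of 5 this loop runs 32 times, with 10 it runs 1024 times, matching the counts above.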

The algorithm calls itself twice, so is it the same as DPLL? No, because left and right are each about half the size of the incoming formula. The same analysis as for quicksort applies: the lines assigning left and right play the role of the partition step, and the condition of the if performs the two recursive calls. The runtime is therefore not quicksort's $\mathcal{O}(n \log n)$ but $\mathcal{O}(n \log n \cdot 2^{\text{max\_cut\_size}})$. If the number of variables in each cut is small, you get a polynomial algorithm.

The cut width algorithm is well suited for parallelization, since the left and right parts of the formula are independent and can be solved in parallel. However, leaf coarsening is needed to avoid excessive parallelism: you apply the cut heuristics at the top, but switch to another algorithm when you get closer to the leaves, just as quicksort implementations switch to a simpler algorithm such as shellsort for small inputs. You split as often as you have cores: with eight cores, you cut three levels deep, which corresponds to seven good cuts, and then solve the resulting subformulas completely independently.

It is also essential to note that the quality of the cuts is a property of the formula, and good cuts can enable the algorithm to have polynomial runtime.

img/Untitled%202.png

If your formula is a tree, it has small cuts, since every node has only one parent.

img/Untitled%203.png

As we have seen with cuts, a formula can have properties that allow us to decide it in polynomial time. Horn is another such property: if the formula is in Horn form, we can solve it with a quadratic algorithm, or even a linear one depending on how BCP is implemented. A Horn clause is a clause that contains at most one positive literal.
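The definition can be checked mechanically; a small sketch over DIMACS-style clauses (function names are invented for illustration):

```python
def is_horn_clause(clause):
    """A clause (list of signed integers) is Horn iff it contains
    at most one positive literal."""
    return sum(1 for l in clause if l > 0) <= 1

def is_horn_formula(clauses):
    """A CNF is Horn iff every clause is Horn."""
    return all(is_horn_clause(c) for c in clauses)
```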

Deciding Horn formulas

To decide the satisfiability of a formula in Horn form, one can first propagate all the units. If this does not lead to a conflict, the formula can be satisfied by assigning all variables that remain unassigned after propagation to false. This yields the minimum model, which satisfies all the implications and query premises.

c This CNF represents a set of Horn clauses:
c (a ∧ b) → c
c (b ∧ c) → d
c d → e
p cnf 5 3
-1 -2 3 0
-2 -3 4 0
-4 5 0
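Putting the decision procedure together, here is a toy Python sketch (quadratic, since it rescans all clauses each round; not an efficient BCP implementation) that propagates units and then sets the remaining variables to false:

```python
def horn_sat(clauses, num_vars):
    """Decide a Horn formula given as DIMACS-style clause lists.
    Returns a satisfying assignment (dict var -> bool) or None."""
    assignment = {}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            # Skip clauses satisfied by the current partial assignment.
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None          # conflict: every literal is false
            if len(free) == 1:       # unit clause: the assignment is forced
                assignment[abs(free[0])] = free[0] > 0
                changed = True
    for v in range(1, num_vars + 1):  # minimum model: everything else false
        assignment.setdefault(v, False)
    return assignment
```

On the three Horn clauses above there are no units to propagate, so the all-false assignment is the minimum model.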

Which phase counts as "positive" in Horn is arbitrary. We could fix a preferred phase for every variable and call a formula "renamable Horn" if every clause contains at most one literal that is not in its preferred phase. Renamable Horn formulas share the same properties and can be decided in polynomial time by propagation alone. The downside of this extension is that, just by looking at the formula, you do not know yet whether it is renamable Horn: you first have to find, for every variable, the default phase in which to consider it. Deciding whether such a renaming exists can be encoded as a 2-SAT problem and solved in polynomial time, but finding a renaming that makes a maximum number of clauses Horn is NP-hard. Still, if the formula is renamable Horn, DPLL solves it in polynomial time without us ever knowing that it is.

img/Untitled%204.png

Some clauses have more than one positive literal (the left part), and some clauses have at most one positive literal, i.e., are Horn (the right part). If we had only the right part, we would have a polynomial algorithm. A good decision heuristic is therefore to try to shrink the left part: we fall back to the occurrence heuristic and eagerly satisfy as many clauses of the left part as possible.

img/Untitled%205.png

Dynamic Largest Individual Sum (DLIS)

DLIS means you go through the unsatisfied clauses and count how often each literal occurs.

This gives you only a literal heuristic, not a variable heuristic. DPLL needs two decisions: which variable to pick and which phase to try first. So how do you turn a literal heuristic into a variable heuristic? You merge the two literals of each variable. The simplest way is to add them up, i.e., add the positive and the negative occurrences of each variable and pick the variable with the largest sum.
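The merge step can be sketched in a few lines; the input is any mapping from literals to occurrence counts (e.g., produced by an occurrence heuristic), and the function name is invented:

```python
def merge_to_variable(counts):
    """Turn literal occurrence counts (mapping signed literal -> count)
    into a variable decision: add positive and negative counts per
    variable, pick the largest sum, and prefer the more frequent phase.
    Returns (variable, preferred_phase)."""
    pos = lambda v: counts.get(v, 0)
    neg = lambda v: counts.get(-v, 0)
    variables = {abs(l) for l in counts}
    best = max(variables, key=lambda v: pos(v) + neg(v))
    return best, pos(best) >= neg(best)
```

For example, counts of {3: 2, -3: 1, 2: 2} select variable 3 (sum 3) with positive phase.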

Look-Ahead Heuristics

Look-ahead heuristics are a technique used in look-ahead solvers, which were considered the best solvers for combinatorial benchmarks about 20 years ago. This approach conducts a look-ahead at every search node instead of just at the root level, as is commonly done today. The process is similar to being lost in a city and peeking around every corner to see whether it is a dead end. The main goal of this technique is to improve heuristic choices and avoid going in the wrong direction.

The look-ahead process involves trying all possible variables, assigning them, and propagating them. If a variable assignment fails, it is flipped and propagated, resulting in more simplification. If a significant number of variables fail, this can lead to substantial improvements in the solver.

An important optimization in this process is to keep track of variables that have been previously tried and have failed. This helps to avoid retrying these variables unnecessarily. However, if a previously failed variable is tried again and results in a successful assignment, other variables must be retried to ensure the new information is incorporated.
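A deliberately naive sketch of this probing loop (without the timestamp optimization): try each phase of each unassigned variable on top of the already-forced literals, and when one phase conflicts under unit propagation, force the other and restart the scan. The bcp helper is a minimal, made-up unit propagator over DIMACS-style clauses.

```python
def bcp(clauses, assignment):
    """Unit-propagate on top of `assignment` (dict var -> bool).
    Returns the extended assignment, or None on conflict."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                 # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None              # conflict
            if len(free) == 1:           # unit clause
                assignment[abs(free[0])] = free[0] > 0
                changed = True
    return assignment

def failed_literals(clauses, num_vars):
    """Probe every literal: if assigning it and propagating conflicts,
    its negation is forced. Newly forced literals trigger re-probing.
    Returns the forced assignments, or None if both phases of some
    variable fail (the formula is unsatisfiable)."""
    forced = {}
    changed = True
    while changed:
        changed = False
        for v in range(1, num_vars + 1):
            if v in forced:
                continue
            for phase in (True, False):
                trial = dict(forced)
                trial[v] = phase
                if bcp(clauses, trial) is None:   # this phase failed
                    forced[v] = not phase         # flip and keep it
                    if bcp(clauses, forced) is None:
                        return None               # both phases fail
                    changed = True
                    break
    return forced
```

Because every newly forced literal causes all variables to be re-probed, this sketch exhibits exactly the quadratic-to-cubic behavior discussed below.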

Implementing this process naively can result in a cubic algorithm, which is inefficient. However, it is still confluent, meaning that the result is unique and can be easily checked. Some optimizations can reduce the complexity to quadratic and even linear in some cases, although the worst-case scenario remains quadratic.

One such optimization is remembering when each literal was last propagated and avoiding retrying literals that have not encountered any new failed literals since their last propagation. This is achieved using a simple caching algorithm, which stores a timestamp for each propagated literal. This reduces the number of unnecessary propagations and can help improve the overall efficiency of the look-ahead solver.

Negation Normal Form

img/Untitled%206.png

Negation normal form (NNF) is a way of representing logical formulas where negation only appears in front of atomic propositions. Many SAT solvers use NNF as an intermediate form to simplify formulas and improve performance.

formula2nnf(Formula f, Boole sign) {
  if (is_variable(f)) {
    return sign ? new_not_node(f) : f;
  }
  if (op(f) == AND || op(f) == OR) {
    l = formula2nnf(left_child(f), sign);
    r = formula2nnf(right_child(f), sign);
    flipped_op = (op(f) == AND) ? OR : AND;
    return new_node(sign ? flipped_op : op(f), l, r);
  } else {
    assert(op(f) == NOT);
    return formula2nnf(child(f), !sign);
  }
}

The function first checks whether the formula is a variable. If so, it returns the negated or non-negated variable depending on the value of sign. If the formula is a conjunction or disjunction, the function recursively converts the left and right child formulas to NNF and combines them using the appropriate logical operator. If the formula is a negation, the function recursively converts the child formula to NNF and toggles the value of sign.

The assert statement documents that the only operator left at this point is negation, i.e., it checks that the input formula is well-formed.

If the input format of the formula is a tree, then formula2nnf is linear because it just goes down and up again recursively. However, the algorithm becomes exponential if the input format is a DAG. By caching, we can make the algorithm linear again.
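The cached variant can be sketched in Python by memoizing on the (node, sign) pair, so each DAG node is converted at most once per sign; the tuple encoding of formulas is an assumption for illustration:

```python
AND, OR, NOT, VAR = 'and', 'or', 'not', 'var'

def formula2nnf(f, sign, cache=None):
    """Convert `f` to NNF, pushing negations down with De Morgan.
    Formulas are tuples: (VAR, name), (NOT, f), (AND/OR, left, right).
    The cache keyed on (id(node), sign) keeps the algorithm linear on
    DAG-shaped inputs."""
    if cache is None:
        cache = {}
    key = (id(f), sign)
    if key in cache:
        return cache[key]
    if f[0] == VAR:
        result = (NOT, f) if sign else f
    elif f[0] in (AND, OR):
        l = formula2nnf(f[1], sign, cache)
        r = formula2nnf(f[2], sign, cache)
        op = {AND: OR, OR: AND}[f[0]] if sign else f[0]  # De Morgan
        result = (op, l, r)
    else:
        assert f[0] == NOT
        result = formula2nnf(f[1], not sign, cache)
    cache[key] = result
    return result
```

Calling formula2nnf(f, False) on ¬(a ∧ ¬b) yields (¬a ∨ b), with every shared subformula converted only once.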


Check Your Knowledge

What are the two main types of decision heuristics used in SAT solving?

Practical Exercises:

\[(x_1 \lor x_2 \lor \neg x_3) \land (\neg x_1 \lor x_3) \land (x_2 \lor \neg x_3 \lor x_4)\]

\[(x_1 \lor \neg x_2) \land (\neg x_1 \lor x_3) \land (\neg x_2 \lor \neg x_3 \lor x_4)\]

\[\neg (x_1 \land (\neg x_2 \lor x_3)) \land (x_4 \lor \neg(x_5 \land x_6))\]

\[(x_4 \lor x_5) \land (\neg x_1 \lor x_2 \lor x_3) \land (x_3 \lor \neg x_4) \land (\neg x_2 \lor x_1) \land (x_4 \lor \neg x_6 \lor x_7) \land (\neg x_3 \lor x_1)\]

\[(x_1 \lor x_2) \land (\neg x_1 \lor x_3 \lor x_4) \land (\neg x_2 \lor \neg x_3) \land (x_1 \lor \neg x_4)\]