Chair of Computer Architecture

University of Freiburg

Lecture 01a

In this lecture, we introduce several concepts and motivate the use of SAT.

Complex Systems

As hardware systems evolve, they become increasingly complex, driven by the demand for higher performance and greater functionality in modern devices. This demand has led to more complex circuitry and an ever-growing number of transistors on a single chip. Consequently, producing and testing chips has become more expensive and difficult.

Producing and testing a chip involves several processes. The first step is to design the chip, that is, to create a blueprint of its circuitry. Once the design is complete, the chip is fabricated using photolithography, which etches the circuitry onto a silicon wafer.

The next step is to test the chips on the wafer to filter out faulty chips. Testing involves applying electrical signals to the chip and measuring the response to ensure the circuitry functions correctly. Faulty chips are marked and discarded, while the functional chips are separated and packaged for shipment to device manufacturers.

The cost of filtering out faulty chips can be significant, especially in large-scale production. Moreover, some faults may escape this initial testing. Post-silicon debugging is a technique for identifying such faults after the chips have been produced: additional tests are run on the chips once they are installed in a device, which allows for more thorough testing and helps uncover faults that initial testing missed.

Another technique used to address the challenges of faulty circuitry is microcode. Microcode is a layer of software embedded in the chip that allows parts of the chip's behavior to be reconfigured. This makes it possible to correct faults discovered after production. Microcode can also be used to implement new features or optimize performance without physically changing the chip's circuitry.

SAT for Formal Verification

SAT solvers have become a crucial tool in formal verification and are used at many companies for verification and testing. Demand for people skilled in formal verification and automated reasoning is expected to remain high.

Before we can use SAT solvers, we must formally specify systems. Formal specification is a technique for building a mathematical model of a system on a high level of abstraction. This model captures the system’s essential features and can be used to reason about its behavior and properties. By specifying a system formally, one can potentially discover design flaws or logical inconsistencies that may be difficult to detect otherwise.
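As a toy illustration of what a SAT solver does (not part of the lecture material), a specification expressed as a propositional formula in conjunctive normal form can be checked for satisfiability by brute force. Real solvers are vastly more sophisticated, but the input/output contract is the same; the clause encoding below follows the DIMACS convention of signed integer literals.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Return a satisfying assignment for a CNF formula, or None.

    Clauses use DIMACS-style literals: positive k means variable k is
    true, negative k means it is false.  Exponential in num_vars, so
    this is only a conceptual sketch for tiny formulas.
    """
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None

# A tiny "specification": (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
spec = [[1, 2], [-1, 3], [-2, -3]]
model = brute_force_sat(3, spec)
```

Here the solver either produces a model (an assignment making the whole formula true) or reports that none exists, which is exactly how SAT-based verification flows consume a solver.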

There are various methods for formal specification, including abstract state machines and the Vienna Development Method (VDM). Abstract state machines provide a way to describe the behavior of a system by defining a set of states and transitions between them. VDM, on the other hand, is a language for specifying the structure and behavior of a system using mathematical notation.

Some industries, such as aerospace, require formal specification methods to ensure the reliability and safety of critical systems. For example, the European Space Agency (ESA) uses the VDM method extensively in its spacecraft development process.

Synthesis

We can use SAT solvers to synthesize computer programs. By program synthesis, we mean generating a computer program from a mathematical specification. Ideally, the input to the synthesis process should be a precisely defined mathematical object, such as a set of equations or a formal specification. The output is a program that satisfies the specification and performs the desired computation. This technique is useful when it is difficult or time-consuming to write a program by hand or where the correctness of the program is critical.
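A minimal sketch of this idea (an assumption for illustration, not the lecture's algorithm): given a specification as a truth table, enumerate a small space of candidate programs and return one that agrees with the specification on every input. SAT-based synthesizers encode this search as a formula instead of enumerating explicitly.

```python
from itertools import product

# Hypothetical candidate space: a few two-input Boolean gates.
CANDIDATES = {
    "and":  lambda a, b: a and b,
    "or":   lambda a, b: a or b,
    "xor":  lambda a, b: a != b,
    "nand": lambda a, b: not (a and b),
}

def synthesize(spec):
    """Return the name of a candidate matching the specification.

    spec maps each input pair to the required output; we keep the
    first candidate that agrees on every input, or None if none does.
    """
    for name, fn in CANDIDATES.items():
        if all(fn(a, b) == out for (a, b), out in spec.items()):
            return name
    return None

# Specification: the output is true exactly when the inputs differ.
xor_spec = {(a, b): a != b for a, b in product([False, True], repeat=2)}
```

Calling `synthesize(xor_spec)` selects the candidate that satisfies the specification on all four input combinations.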

Model Checking

Model checking formally verifies a hardware design’s correctness by analyzing its logical model. This involves defining specific properties that need to be checked, such as safety, liveness, and reachability, and then using a model checker to verify those properties.
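The following sketch shows the core idea behind checking a safety property (all names and the toy model are illustrative assumptions): explore the states reachable from the initial state up to a bound, and report a counterexample trace if an unsafe state is found. Bounded model checkers encode exactly this exploration as a SAT formula rather than searching explicitly.

```python
from collections import deque

def check_safety(initial, step, safe, bound):
    """Breadth-first exploration up to `bound` steps.

    Returns a counterexample path ending in an unsafe state, or None
    if no violation is found within the bound.
    """
    queue = deque([(initial, [initial])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        if not safe(state):
            return path  # counterexample trace for the engineer
        if len(path) > bound:
            continue
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# Toy model: a mod-4 counter; safety property: the counter never reaches 3.
trace = check_safety(0, lambda s: [(s + 1) % 4], lambda s: s != 3, bound=10)
```

The returned trace is precisely the kind of artifact that makes model checking valuable in practice: not just a "property fails" verdict, but a concrete execution leading to the failure.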

The key advantage of model checking is that it enables engineers to identify potential errors and design flaws early in the development cycle before the system is built. This can save significant time and resources, allowing engineers to make corrections and optimizations before the design is finalized. In addition, model checking can help improve the system’s overall quality and reliability by ensuring it meets the necessary safety and performance requirements.

Model checking has become a standard practice in the hardware industry, with companies such as ARM and Apple heavily investing in the technology. This is because it provides an effective way to improve the efficiency and reliability of hardware design while reducing the risk of costly errors and delays. By using model checking, engineers can gain greater confidence in the correctness and safety of their designs, which can ultimately lead to better products and a more competitive edge in the marketplace.

Equivalence Checking

Equivalence checking verifies that two versions of a design produce the same outputs for every possible input. Typically, this means comparing two representations of a hardware design, such as a register-transfer-level (RTL) design and a gate-level netlist, to ensure they are functionally equivalent. Equivalence checking is becoming more widely used in industry, as it verifies that changes made to a design do not affect its functionality.
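A common reduction to SAT is the miter construction: XOR the outputs of the two designs and ask a solver whether the XOR can ever be true. The sketch below (illustrative names; exhaustive enumeration stands in for the SAT query) compares a high-level multiplexer against a gate-level version built from AND/OR/NOT.

```python
from itertools import product

def rtl_mux(sel, a, b):
    """High-level ("RTL") description: a 2-to-1 multiplexer."""
    return a if sel else b

def gate_mux(sel, a, b):
    """Gate-level version of the same multiplexer."""
    return (sel and a) or ((not sel) and b)

def equivalent(f, g, num_inputs):
    """Miter-style check: f and g are equivalent iff f(x) XOR g(x)
    is unsatisfiable, i.e. never true for any input vector x."""
    return not any(f(*ins) != g(*ins)
                   for ins in product([False, True], repeat=num_inputs))
```

If the two designs ever disagreed on some input, that input would be a satisfying assignment of the miter and serve as a concrete counterexample for debugging.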

Limits

Understanding how SAT solving algorithms work can be valuable for application developers, as some of the algorithms can be generalized to other contexts. However, using SAT solving in practice is not always fully automatic, and designers must understand the limits of the algorithms used in formal verification tools.

Check Your Knowledge