Why Use-Cases Are Critical in SoC Verification
Verifying the correct behavior of small functional blocks and simple IP on a SoC is largely a solved problem. Techniques for creating tests that adequately exercise the functionality of these smaller blocks have been widely used for years. On the other hand, verifying the correct behavior of a SoC in its entirety remains a difficult problem.
Statistics from the 2014 Wilson Research Group Functional Verification Study indicate that 57% of the ASIC/IC development effort is spent in verification, yet half of all respins are due to logic or functional design errors that should have been found during the verification phase. Clearly, there is room for improvement in the verification process.
Typically, SoC verification has emphasized a directed functional testing approach that attempts to enumerate the essential functions of an IC and then devise test cases that exercise those functions. These tests usually focus on basic behaviors such as reading from and writing to memory, bus operations, memory and power management functions, reset, and so on. Sometimes test vectors are derived systematically, targeting a particular function; at other times they are generated randomly in an attempt to cover a wider range of test scenarios. These approaches work well with block-level designs and smaller pieces of IP, but they tend to break down in system-level designs that have scores of these blocks interconnected in complex ways. Even if every block has been thoroughly tested using constrained random tests, the interconnections between the blocks result in systems with exponentially more functionality, requiring exponentially more testing. In SoC designs, for example, there are so many functional combinations that it may be impossible to test all of them within schedule and budget constraints.
Another problem with directed functional testing is that it yields few insights into performance bottlenecks that might appear in a SoC under real world scenarios. For example, directed tests can be used to verify the correct behavior of bus operations, and they may even give insight into how quickly the bus operations complete; but they don't expose bottlenecks that occur when multiple bus operations compete concurrently for resources and bandwidth. Scenarios like this arise frequently during production use of a SoC, yet they are rarely exposed by directed tests.
Is there a better way? One approach that has worked well at Intrinsix involves the development of use-cases to test SoCs. Use-cases are essentially scenarios that represent how a SoC (or any other complex system) is used, often from the standpoint of an end-user.
For example, a simple use-case for a digital camera SoC might be specified as follows: take a picture with the camera, and store the image on the SD card (see illustration below). This involves a sequence of functional interactions, including:
- powering up the camera hardware, which might have been powered down to conserve battery power,
- capturing a raw image,
- storing the raw image in memory,
- requesting that the image processor convert the image to JPEG,
- waiting for the image processor to complete the conversion, then
- transferring the encoded image from memory to the SD card.
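The steps above can be sketched as an executable test sequence. The Python below is a minimal illustration against a toy behavioral model of the camera subsystems; the class and function names are invented for this sketch and do not come from any real verification library.

```python
# Hypothetical sketch: the "take a picture" use-case as a test sequence.
# CameraSoCModel and its methods are illustrative stand-ins, not a real API.

class CameraSoCModel:
    """Minimal behavioral model of the camera subsystems under test."""
    def __init__(self):
        self.powered = False
        self.memory = {}
        self.sd_card = []

    def power_up(self):
        self.powered = True

    def capture_raw_image(self):
        assert self.powered, "capture attempted while powered down"
        return b"RAW" + bytes(16)  # placeholder sensor data

    def store(self, addr, data):
        self.memory[addr] = data

    def encode_jpeg(self, addr):
        # Model the image processor converting RAW -> JPEG in place.
        raw = self.memory[addr]
        self.memory[addr] = b"JPEG" + raw[3:]

    def transfer_to_sd(self, addr):
        self.sd_card.append(self.memory[addr])

def take_picture_use_case(soc, addr=0x1000):
    """Drive the full 'take a picture and store it to SD card' scenario."""
    soc.power_up()
    raw = soc.capture_raw_image()
    soc.store(addr, raw)
    soc.encode_jpeg(addr)
    soc.transfer_to_sd(addr)
    # End-to-end check: the SD card now holds one encoded image.
    assert soc.sd_card and soc.sd_card[-1].startswith(b"JPEG")
```

The point of the sketch is the shape of the test: one scenario that threads through power management, the sensor, memory, the image processor, and the SD card controller, with a single end-to-end pass criterion.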
This use-case tests a whole series of functional behaviors that, combined, represents a typical real world usage of a digital camera (e.g. in a cell phone). It will test several subsystems, possibly exposing functional errors within them.
Also, this use-case can detect performance bottlenecks. For example, the image processor might not be able to respond as fast as the specification requires, or the fabric may not be able to handle the transfer speeds from memory through the SD Card Controller. Suppose the use-case is amended slightly to include multiple repeated image captures in burst mode. This will trigger overlapping operations on the bus, possibly exposing additional performance bottlenecks.
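To illustrate how a burst-mode use-case can flag a throughput shortfall, here is a hedged sketch with a toy timing model. The cycle costs and the per-frame budget are invented numbers for illustration, not figures from any real design.

```python
# Hypothetical timing sketch for burst mode. All cycle counts are
# invented; a real flow would take these from simulation or emulation.

ENCODE_CYCLES = 700      # assumed image-processor latency per frame
TRANSFER_CYCLES = 500    # assumed memory -> SD transfer time per frame
FRAME_BUDGET = 1000      # assumed cycles available per burst frame

def burst_capture(num_frames):
    """Return per-frame completion times for a fully serialized pipeline."""
    t = 0
    completions = []
    for _ in range(num_frames):
        t += ENCODE_CYCLES + TRANSFER_CYCLES  # no overlap: worst case
        completions.append(t)
    return completions

times = burst_capture(4)
# Flag every frame whose completion slips past its budget slot.
late = [i for i, t in enumerate(times) if t > (i + 1) * FRAME_BUDGET]
```

With these assumed numbers every frame misses its slot, which is exactly the kind of finding a burst-mode use-case surfaces and a single-shot directed test does not.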
The image processing use-case is expressed from the standpoint of an end user, but most ICs have functionality that is targeted at other types of users, such as software developers. It is important to develop and test use-cases that cover critical functionality for these users, in addition to the end consumer.
Consider this example of a use-case for a software developer. Most custom ICs have a JTAG interface that enables hardware-assisted debugging. Software debuggers use the JTAG test access port (TAP) to communicate with the hardware during debug sessions; this is a crucial piece of functionality for software developers and must be fully operational in production ICs. So it is important to develop use-cases that test this functionality. One example of such a use-case might be to set and trigger a hardware breakpoint during the execution of a system software module.
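A sketch of how such a debug use-case might be scripted, using a toy CPU model in place of real TAP transactions; all names here are illustrative assumptions, not a real JTAG or debugger API.

```python
# Hypothetical sketch of the debug use-case: program a hardware
# breakpoint, run the core, and confirm it halts at the right address.

class ToyCpu:
    """Minimal CPU model: the program counter advances one address per step."""
    def __init__(self, program_length):
        self.pc = 0
        self.end = program_length
        self.breakpoints = set()
        self.halted = False

    def set_hw_breakpoint(self, addr):
        # Stand-in for writing a debug-unit address comparator over the TAP.
        self.breakpoints.add(addr)

    def run(self):
        while self.pc < self.end:
            if self.pc in self.breakpoints:
                self.halted = True   # debug unit halts the core
                return
            self.pc += 1             # execute one (modeled) instruction

cpu = ToyCpu(program_length=100)
cpu.set_hw_breakpoint(0x20)
cpu.run()
# Pass criterion: the core halted exactly at the breakpoint address.
```

As with the camera scenario, the value is the end-to-end check: the breakpoint must be programmed, recognized, and reported exactly where the developer set it.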
Use-cases are not the silver bullet that solves all verification problems, but they provide a useful and efficient framework for testing SoCs for correct functionality and for verifying that they meet performance requirements. They are a proven and essential piece of the verification process at Intrinsix.
If your next project could use a helping hand, consider downloading the eBook titled “Five Criteria for Evaluation of Semiconductor Design Service Providers” as an aid in your process.