In an earlier post, I gave a short introduction to why performance verification is necessary for today's systems on chips, along with a few key metrics that can be measured. Since any system will have multiple masters and multiple slaves, it is quite important to exercise these elements in various combinations such that the fabric is stressed and its internal arbiters and buffers are exhausted.
An interconnect is the backbone of any system, as processor cores, DMA engines, graphics engines, memories and other I/O devices all connect to it. Performance requirements have climbed steeply in today's sophisticated world, where electronic chips can be found everywhere: consumer appliances, healthcare, industrial controls, and automobiles. Whatever the field may be, consumers always expect top-notch performance without any visible lag or a mediocre user experience. Hence, in recent years another field of verification has sprung up in addition to functional verification - performance verification.
UVM sequence macros are a great way of reducing code and hiding away some details.
`uvm_do macros enable a sequence item to be created, randomized and executed on a sequencer all from a single line of code.
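As a sketch of that single line in action, here is a minimal sequence using `uvm_do. The item type my_item is a hypothetical user-defined uvm_sequence_item; the commented-out lines show roughly what the macro does for you.

```systemverilog
// Hypothetical sequence; my_item is assumed to be a user-defined
// uvm_sequence_item with rand fields.
class my_seq extends uvm_sequence #(my_item);
  `uvm_object_utils(my_seq)

  function new(string name = "my_seq");
    super.new(name);
  endfunction

  virtual task body();
    // One line: create the item, start it on the sequencer,
    // randomize it, and complete the handshake with the driver.
    `uvm_do(req)

    // Roughly equivalent to:
    //   req = my_item::type_id::create("req");
    //   start_item(req);
    //   if (!req.randomize()) `uvm_error("SEQ", "randomize failed")
    //   finish_item(req);
  endtask
endclass
```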
`uvm_create is another macro which simply creates an object of a sequence item so that it can be handled later on. Let's see what the name of an object created by `uvm_create looks like. Unlike a typical type_id::create() call, where you get to specify the name yourself, `uvm_create does not let you choose one. Not that it matters much, but it makes for good trivia.
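Going by the macro definition in uvm_sequence_defines.svh, `uvm_create stringifies its argument and passes it as the name to create_item(), so the object's name ends up being the variable name itself. A small sketch inside a sequence body:

```systemverilog
virtual task body();
  // req is the built-in request handle of uvm_sequence.
  `uvm_create(req)

  // The macro passes `"req`" (the stringified argument) to
  // create_item(), so get_name() should return "req".
  `uvm_info("SEQ", $sformatf("created object name = %s", req.get_name()),
            UVM_LOW)
endtask
```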
In the UVM world there exists a function to reset a register block within a model. This is a step that beginners often overlook, because the need to invoke it might not be clear until errors crop up. The register model primarily holds three different kinds of values, each serving a different purpose. Let's see how the reset() method affects each of them.
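As a quick sketch, assuming a register model regmodel with a register ctrl whose hard reset value is 'h0: in the reference implementation, reset() restores the desired value, the mirrored value, and the value field of each uvm_reg_field to the reset value.

```systemverilog
// Hypothetical model/register names; status is a uvm_status_e.
regmodel.ctrl.write(status, 'hA5);  // desired/mirrored now track 'hA5

regmodel.reset();                   // or regmodel.ctrl.reset("HARD")

// After reset(), all three kinds of values go back to the reset value:
assert (regmodel.ctrl.get()                == 'h0); // desired value
assert (regmodel.ctrl.get_mirrored_value() == 'h0); // mirrored value
```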
The UVM register model is quite extensive and has many useful APIs that help query registers and fields by name. Typically, registers are accessed through hierarchical references, but there may be a better alternative when a consistent naming scheme is applied to all registers in the model.
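For instance, uvm_reg_block::get_reg_by_name() lets you look registers up by their string names. A small sketch, assuming a hypothetical model with registers named ctrl_0 through ctrl_3:

```systemverilog
uvm_reg      r;
uvm_status_e status;

// Walk a consistently named family of registers without hierarchical
// references to each one.
for (int i = 0; i < 4; i++) begin
  r = regmodel.get_reg_by_name($sformatf("ctrl_%0d", i));
  if (r == null)
    `uvm_error("RAL", $sformatf("register ctrl_%0d not found", i))
  else
    r.write(status, 'h1);
end
```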
Creating a global singleton object that can be referenced from elsewhere in the testbench is sometimes a good thing. This is very similar to the way
static variables in a class work - only one variable is created and made accessible for all class objects. In this case, a single class object is created that can be accessed from other testbench components. As an example, you could have such a class object to contain all the design or testbench specification features like number of masters and slaves, or clock frequency requirements for each interface, etc. Let's see how to effectively create a singleton object.
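One common sketch is a class with a local constructor, a local static handle, and a static get() accessor. All names below (tb_spec and its fields) are hypothetical. Note that the constructor being local means the class cannot be factory-registered in the usual way, which is acceptable for a pure configuration holder.

```systemverilog
// Minimal singleton holding testbench-wide specification data.
class tb_spec extends uvm_object;
  int unsigned num_masters = 4;
  int unsigned num_slaves  = 8;

  // Local constructor: no one outside can call new() directly.
  local function new(string name = "tb_spec");
    super.new(name);
  endfunction

  local static tb_spec m_inst;

  // The one and only access point; builds the object on first use.
  static function tb_spec get();
    if (m_inst == null)
      m_inst = new();
    return m_inst;
  endfunction
endclass

// Usage from any component:
//   int n = tb_spec::get().num_masters;
```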
Yes. UVM has a class-based dynamic queue that can be allocated on demand, and passed and stored by reference. Even though
uvm_queue is a parameterized class extended from
uvm_object, it is not registered with the factory and hence invocation of
new() function is the correct way to create a queue object.
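A small usage sketch, going by the uvm_queue API in the reference implementation:

```systemverilog
// uvm_queue is not factory-registered, so plain new() is correct.
uvm_queue #(int) q = new("q");

q.push_back(10);   // queue: {10}
q.push_front(5);   // queue: {5, 10}

// size() and get() give indexed access; pop_front()/pop_back()
// remove and return elements.
$display("size=%0d first=%0d", q.size(), q.get(0));
```

Since the queue is a class object, handing q to another component shares the same underlying storage, unlike a built-in SystemVerilog queue, which is copied on assignment.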
At times we might need to accept values from the command line to make our testbench and testcases more flexible. UVM provides this support via the
uvm_cmdline_processor singleton class. Generation of the data structures that hold the command line arguments happens during construction of the class object. A global variable called
uvm_cmdline_proc is created at initialization time which can be used to access command line options. Let's see more on how this feature can be used.
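As a sketch, here is how a hypothetical +loop_count=<n> plusarg could be read through the global uvm_cmdline_proc handle:

```systemverilog
// Somewhere in a test or component:
string arg_val;
int    loop_count = 1;  // default when the plusarg is absent

// get_arg_value() returns the number of matches and fills arg_val
// with the text following the matched prefix.
if (uvm_cmdline_proc.get_arg_value("+loop_count=", arg_val))
  loop_count = arg_val.atoi();

`uvm_info("CMDLINE", $sformatf("loop_count = %0d", loop_count), UVM_LOW)
```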
A few months ago, I was involved in writing a couple of tests that had to be run using RTL netlists with scan chains in them. Since this involved a lot of gate level signals, it was already cumbersome to debug. The idea was to enter the scan mode and shift out values in the chain and then be able to observe the value of a particular flop, after so many cycles at the output pad. So, there was a need to check if we got the right value at the pin after scan entry.
UVM has this nice feature of being able to print the line number and file name from where a reporting task is called. This is very helpful during the early days of testbench debug, but it can soon clutter the log reports. Just imagine the file name occupying most of the screen space (true in most projects because of long file paths), making it difficult to find the actual report message. Good news! There's a way to disable this.
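In the reference implementation, uvm_message_defines.svh honors a compile-time switch for this: when UVM_REPORT_DISABLE_FILE_LINE is defined, the report macros pass an empty file name and line 0, so neither is printed.

```systemverilog
// Define before compiling the UVM package, either in source:
`define UVM_REPORT_DISABLE_FILE_LINE

// ...or (simulator-specific) on the compile command line, e.g.:
//   vlog +define+UVM_REPORT_DISABLE_FILE_LINE ...
```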
One of the main features of UVM is the factory mechanism, and we already know how to use
`uvm_component_utils() and `uvm_object_utils() within user-defined component and object classes. It's a way of registering our new component with the factory so that we can request the factory to return an object, possibly of an overridden type, later on via the type_id::create() method. Let's see what happens behind the scenes when the code is compiled and elaborated for the example that follows.
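A simplified view of what the macro expands to, based on uvm_object_defines.svh and using a hypothetical class my_driver; the real expansion has a few more details:

```systemverilog
// Inside `class my_driver extends uvm_driver #(my_item);` the macro
// `uvm_component_utils(my_driver) expands to (roughly):

// A specialization of the registry proxy, which registers itself
// with the factory under the string name "my_driver".
typedef uvm_component_registry #(my_driver, "my_driver") type_id;

static function type_id get_type();
  return type_id::get();
endfunction

virtual function uvm_object_wrapper get_object_type();
  return type_id::get();
endfunction

const static string type_name = "my_driver";
virtual function string get_type_name();
  return type_name;
endfunction
```

So type_id::create("drv", this) goes through the registry proxy and the factory, which is exactly what makes type overrides possible later on.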
An agent is a hierarchical block which puts together the verification components dealing with a specific DUT interface. It usually contains a sequencer to generate data transactions, a driver to drive these transactions to the DUT, and a monitor that sits on the interface and captures the pin wiggling that happens there. So, in a typical UVM environment there'll be multiple agents connected to the various interfaces of the DUT. Sometimes we do not want to drive anything to the DUT, but simply monitor the data it produces. It would be nice to have a feature to turn the sequencer and driver of an agent ON and OFF as required.
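This is exactly what the is_active knob of uvm_agent provides. A sketch with hypothetical component classes, assuming super.build_phase() picks up the is_active configuration:

```systemverilog
class my_agent extends uvm_agent;
  `uvm_component_utils(my_agent)

  my_monitor   mon;
  my_driver    drv;
  my_sequencer sqr;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);  // populates is_active from config
    mon = my_monitor::type_id::create("mon", this);
    // Driver and sequencer exist only in an active agent.
    if (get_is_active() == UVM_ACTIVE) begin
      drv = my_driver::type_id::create("drv", this);
      sqr = my_sequencer::type_id::create("sqr", this);
    end
  endfunction

  virtual function void connect_phase(uvm_phase phase);
    if (get_is_active() == UVM_ACTIVE)
      drv.seq_item_port.connect(sqr.seq_item_export);
  endfunction
endclass

// From the env, a specific agent can be made passive:
//   uvm_config_db #(uvm_bitstream_t)::set(this, "agent1",
//                                         "is_active", UVM_PASSIVE);
```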
Tests can be run in a UVM environment by either specifying the testname as an argument to
run_test() or as a command-line argument using
+UVM_TESTNAME=<test_name>. This can be considered the entry point to how UVM starts each component, configures it and runs a simulation. There is a set of UVM core services within the structure, capable of providing handles to the factory and the root object. We'll see what the general flow looks like in the short explanation that follows.
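The typical entry point sits in the top-level module; base_test here is a hypothetical test name:

```systemverilog
module tb_top;
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  initial begin
    // If +UVM_TESTNAME=<name> is present on the command line, it
    // takes precedence over the argument passed here.
    run_test("base_test");
  end
endmodule
```

From here, run_test() asks the factory for an instance of the test class, uvm_root takes ownership of the component hierarchy, and the phases begin to execute.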
Coverage metrics are widely used in SV/UVM verification to improve the quality of the test suite and estimate the effort required to finish the verification task. They indicate how much of the design code has been exercised by the existing set of tests, and provide an idea of how to write future tests that target specific coverage holes. You can perform code and functional coverage analysis after every regression to identify how many tests should be developed to target specific features of the design. Many times you'll find that, in spite of trying every combination of input stimuli, there are certain pieces of code that simply do not get hit or exercised in simulation. You might have stumbled onto something known as unreachable code or dead code. As the name implies, it is a part of the source code of a program or RTL that can never be executed because there is no control path to it. Dead code can also be a piece of code that may be executed but does not produce any effect on the output.
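A contrived RTL illustration of such a branch, which no stimulus can ever reach:

```systemverilog
// mode is declared as logic [1:0], so all four encodings are
// enumerated in the case items and the default arm is dead code:
// no input stimulus can make it execute, and code coverage will
// report it as a permanent hole unless it is waived as unreachable.
always_comb begin
  unique case (mode)
    2'b00, 2'b01: y = a;
    2'b10, 2'b11: y = b;
    default:      y = '0;  // unreachable
  endcase
end
```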