
How to customize default_hdl_checks/rules in CCD conformal constraint designer

Dear all,

I am using Conformal Constraint Designer (version 17.1) to analyse a SystemVerilog-based design.

While performing the default HDL checks, it finds some violations in the RTL and reports warnings and other messages from the RTL checks and other rule groups.

My questions:

Is there any directive I can add to the RTL (SystemVerilog) so that a particular line of code or signal is ignored, i.e. not checked during the HDL or RTL checks?

I can set ignore rules in the Rule Manager (GUI), but that does not seem effective once line numbers change or new signals are introduced.

What is the best way to customize the default_hdl_rules?

I will be grateful for your guidance.

Thanks for your time.





Force cell equivalence between same-footprint and same-functionality hard-macros in Conformal LEC

For a netlist vs. netlist LEC flow we have to solve the following problem:

- in the RTL code we replicate a large array of N x M all-identical hard macros, let's call them MACRO_A

- MACRO_A is pre-assembled in Innovus and contains digital parts and analog parts (bottom-up hierarchical flow)

- at top-level (full-chip) we instantiate this array of all-identical macros

- in the top-level place-and-route flow we perform ecoChangeCell to remaster the top row of this array with MACRO_B

- MACRO_B is just a copy of the original MACRO_A cell, with the same pin positions, the same internal digital functionality, and the same digital layout; the only slight differences are in one analog block inside the macro

- MACRO_A and MACRO_B have identical .lib files generated with the do_extract_model command at the end of the Innovus flow; they differ only in the macro name

- when comparing the post-synthesis netlist against the post-place-and-route netlist, we load the .lib files of both macros in Conformal LEC

- the LEC flow fails because Conformal LEC sees only MACRO_A instantiated in the post-synthesis netlist, but both MACRO_A and MACRO_B in the post-place-and-route netlist

Since both the digital functionality and the standard-cell layout are the same for MACRO_A and MACRO_B, we don't want to track this difference as early as the RTL stage; we just want to perform this ECO change in place-and-route and force Conformal to assume equivalence between MACRO_A and MACRO_B.

Basically, what I'm looking for is something similar to the add_instance_equivalences Conformal command, but one that works between the Golden and Revised designs on cell primitives/black boxes.

Is this flow supported?

Thanks in advance

Luca





map_to_mux

Hi!

I want to use the map_to_mux pragma for a particular piece of logic in my code, which is inside a generate block.

module xyz #(
  parameter P_IN_WD        = 100,
  parameter P_OUT_WD       = 100,
  parameter P_XXX          = 1,
  parameter P_ZZZ          = 1,
  parameter P_IN_OFFSET_WD = 10
) (
  input  [P_IN_WD-1:0]        in_data,
  input  [P_IN_OFFSET_WD-1:0] in_offset,
  output [P_IN_WD-1:0]        out_data
);

generate
  if (P_XXX == 1)
    // cadence infer_mux "MUX"
    // cadence map_to_mux "MUX"
    begin : XXX
      assign out_data = (in_data >> in_offset*P_ZZZ);
    end
  else
    // cadence infer_mux "MUX"
    // cadence map_to_mux "MUX"
    begin : YYY
      assign out_data = (in_data << in_offset*P_ZZZ);
    end
endgenerate

endmodule





New Memory Estimator Helps Determine Amount of Memory Required for Large Harmonic Balance Simulations

Hi Folks, A question that I've often received from designers, "Is there a method to determine the amount of memory required before I submit a job? I use distributed processing and need to provide an estimate before submitting jobs." The answer...(read more)





Distortion Summary in New CDNLive YouTube Video and at IEEE IMS2014 Next Week!

Hi Folks, Check out this great new video on YouTube: CDNLive SV 2014: PMC Improves Visibility and Performance with Spectre APS In this video from CDNLive Silicon Valley 2014, Jurgen Hissen, principal engineer, MSCAD, at PMC, discusses an aggressive...(read more)





Cadence Presenting Four Spectre RF MicroApp Papers at IMS2016, May 22-27

Hello Spectre RF Users, Next week is my all time favorite technical conference - the International Microwave Symposium IMS2016 , May 22-27 in San Francisco, CA at the Moscone Center. If you're at the conference, please stop by the Cadence booth and...(read more)





7 Habits of Highly Successful S-Parameters: How to Simulate Those Pesky S-Parameters in a Time Domain Simulator

Hello Spectre Users, Simulating S-parameters in a time domain (transient, periodic steady state) simulator has been and continues to be a challenge for many analog and RF designers. I'm often asked: What is required in order to achieve accurate...(read more)





Link to: 7 Habits of Highly Successful S-Parameters: How to Simulate Those Pesky S-Parameters in a Time Domain Simulator

Hi All, If you were unable to attend IMS 2017 in June 2017, the IMS MicroApp “7 Habits of Highly Successful S-Parameters” is on our Cadence website. On Cadence Online Support , the in-depth AppNote is here: 20466646 . Best regards, Tawna...(read more)





Multiple commands using ipcBeginProcess

Hi,

I am trying to use "sed -e 's" from SKILL code to edit the Unix file "FileA", replacing 3 words in its 2nd line.

How can I run the multiple commands below using ipcBeginProcess? Should I use ipcWait or ipcCloseProcess?

Would combining them with && work, given that I have to run each command serially?

With the code below, only the first command gets executed. Please advise.

FileA="/user/tmp/text1.txt"

sprintf(Command1 "sed -e '2s/%s/%s/g' %s > %s" comment1 get(form concat("dComment" RDWn))->value FileA FileA)
cid = ipcBeginProcess(Command1)


sprintf(Command2 "sed -e '2s/%s/%s/g' %s > %s"  Time getCurrentTime() FileA FileA)
cid1 = ipcBeginProcess(Command2)


sprintf(Command3 "sed -e '2s/%s/%s/g' %s > %s"  comment2 get(form concat("Duser" RDWn))->value FileA FileA)
cid2 = ipcBeginProcess(Command3)
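
What I have in mind is something like the sketch below (untested; it assumes ipcWait blocks until the child started by ipcBeginProcess exits, and it uses "sed -i" — GNU sed — instead of redirecting a file onto itself, which truncates it):

; untested sketch: run the three sed edits strictly one after the other
FileA = "/user/tmp/text1.txt"

sprintf(Command1 "sed -i '2s/%s/%s/g' %s" comment1 get(form concat("dComment" RDWn))->value FileA)
cid = ipcBeginProcess(Command1)
ipcWait(cid)     ; wait for the first edit to finish before starting the next

sprintf(Command2 "sed -i '2s/%s/%s/g' %s" Time getCurrentTime() FileA)
cid1 = ipcBeginProcess(Command2)
ipcWait(cid1)

sprintf(Command3 "sed -i '2s/%s/%s/g' %s" comment2 get(form concat("Duser" RDWn))->value FileA)
cid2 = ipcBeginProcess(Command3)
ipcWait(cid2)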

Thanks,

Ajay





Get schematic to layout bound stdcells for array

I can get the bound stdcells using bndGetBoundObjects, but I cannot tell which layout instance each individual schematic stdcell corresponds to.

Is there a way to get the layout-bound stdcells of an arrayed schematic symbol, whether or not the layout stdcell names match the symbol naming?

Once the schematic array stdcells are bound to the layout stdcells, how do I get the correct terminal term~>name and net~>name?

Example of a schematic symbol and layout stdcell:

Schematic

INV<0:2>    instTerms~>terms~>name = ("vss" "vdd" "A" "Y")

                   instTerms~>net~>name = ("<*3>vss" "<*3>vdd" "in<0:2>" "nand2A,nand3B,nor2B")

Layout ( I know it is bad practice, but it happens )

stdcell1 instTerms~>terms~>name = ("vss" "vdd" "A" "Y")

             instTerms~>net~>name = ("vss" "vdd" "in<0>" "nand2A")

I23        instTerms~>terms~>name = ("vss" "vdd" "A" "Y")

             instTerms~>net~>name = ("vss" "vdd" "in<1>" "nand3B")

INV(2) instTerms~>terms~>name = ("vss" "vdd" "A" "Y")

             instTerms~>net~>name = ("vss" "vdd" "in<2>" "nor2B")
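
For reference, a minimal SKILL sketch of how the layout side could be walked to dump each instance's terminal-to-net binding (cv is assumed to be the layout cellView id; the procedure name and output formatting are just illustrative):

; sketch: print term -> net for every instTerm of every instance in a cellview
procedure(PrintInstBindings(cv)
    foreach(inst cv~>instances
        printf("%s (%s)\n" inst~>name inst~>cellName)
        foreach(it inst~>instTerms
            printf("    %s -> %s\n" it~>term~>name it~>net~>name)
        )
    )
)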

Paul





How can I make a SKILL procedure not callable?

Inside the scope guarded by isCallable there is code which I don't want to be executed.

The procedure named in isCallable is currently callable.

I want to make that procedure so it cannot be called.  How do I do that?

I can't change the isCallable line or the scope.  I want to change its behavior by making sure that the procedure does not exist (obviously this would be done before the code is executed).
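
What I'm imagining is something along these lines (a sketch only, assuming SKILL's putd with nil removes a function binding so that isCallable then returns nil; myProc is a placeholder name):

; hypothetical sketch: remove the binding of myProc before the guarded code runs
; (assumption: putd with nil undefines the function, so isCallable('myProc) returns nil)
putd('myProc nil)

; the existing guard, which I cannot touch, then simply skips its body:
when(isCallable('myProc)
    myProc()    ; no longer executed
)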





Create the title & frame for view schematic

Hi all,

I want to write a SKILL script to create the title block & frame for a schematic view. My question is whether SKILL provides any function for doing this.

Best regards,

Huy Hoang





When Arm meets Intel – Overcoming the Challenges of Merging Architectures on an SoC to Enable Machine Learning

As the stakes for winning server segment market share grow ever higher an increasing number of companies are seeking to grasp the latest Holy Grail of multi-chip coherence. The approach promises to better enable applications such as machine learning...(read more)





Celebrating Five Years of Performance-Optimized Arm-Based SoCs: Now including AMBA5

It’s been quite a long 5-year journey building and deploying Performance Analysis, Verification, and Debug capabilities for Arm-based SoCs. We worked with some of the smartest engineers on the planet. First with the engineers at Arm, with whom we...(read more)





Mediatek Deploys Perspec for SoC Verification of Low Power Management (part 3 of 3)

Here we conclude the blog series and highlight the results of Mediatek's use of Cadence Perspec™ System Verifier for their SoC level verification. In case you missed it, Part 1 of the blog is here, and Part 2 of the blog is here. One of their key...(read more)





RAMAC Park and the Origin of the Disk Drive

Did you know that there is a park in San Jose named after a disk drive? Actually, technically it is named after the first computer that used disk drives. You couldn't just go and buy a disk drive...

[[ Click on the title to access the full blog on the Cadence Community site. ]]





Specman’s Callback Coverage API

Our customers’ tests have become more complex, longer, and consume more resources than before. This increases the need to optimize the regression while not compromising on coverage. Some advanced...

[[ Click on the title to access the full blog on the Cadence Community site. ]]





Library Characterization Tidbits: Recharacterize What Matters - Save Time!

Recently, I read an article about how failure is the stepping stone to success in life. It instantly struck a chord and a thought came zinging from nowhere about what happens to the failed arcs of a...

[[ Click on the title to access the full blog on the Cadence Community site. ]]





Sunday Brunch Video for 3rd May 2020

www.youtube.com/watch Made on my balcony (camera Carey Guo) Monday: EDA101 Video Tuesday: Weekend Update Wednesday: RAMAC Park and the Origin of the Disk Drive Thursday: 1G Mobile: AMPS, TOPS, C-450,...

[[ Click on the title to access the full blog on the Cadence Community site. ]]





BoardSurfers: Training Insights: Placing Parts Manually Using Design for Assembly (DFA) Rules

If I talk about my life, it was much simpler when I used to live with my parents. They took good care of whatever I wanted - in fact, they still do. But now, I am living alone, and sometimes I buy...

[[ Click on the title to access the full blog on the Cadence Community site. ]]





Specman: Analyze Your Coverage with Python

In the previous blog about Python and Specman, Specman: Python Is Here!, we described the technical details of the Specman-Python integration. Since Python provides so many easy-to-use libraries in various fields, it is very tempting to leverage these cool Python apps.

Coverage has always been at the center of the verification methodology; in the last few years it has received even more focus as people develop advanced utilities, often with Machine Learning aids. In any case, any attempt to leverage your coverage usually starts with some analysis of the behavior and trends of a few typical tests. Visualizing the data makes it easier to understand, analyze, and communicate. Fortunately, Python has many visualization libraries.

In this blog, we show an example of how you can use the Python plotting library matplotlib to easily display coverage information during a run: we use the Specman Coverage API to extract coverage data, a Python module to display coverage grades interactively during a single run, and the glue that connects the two.

Before we look at the example: if you have read the previous blog about Specman and Python and were concerned that Python 3 was not supported, we are glad to report that as of Specman 19.09, Python 3 is supported (in addition to Python 2).

The Testcase
Let’s say I have a stable verification environment and I want to make it more efficient. For example, I want to check whether I can make the tests shorter while barely harming the coverage. I am not sure exactly how to attack this task, so a good place to start is to visually analyze the behavior of the coverage in some typical test I chose. The first thing we need to do is extract the coverage information of the interesting entities. This can be done using the old Coverage API.

Coverage API
Coverage API is a simple interface to extract coverage information at a certain point. It is implemented through a predefined struct type named user_cover_struct. To use it, you need to do the following:

  1. Define a child of user_cover_struct using like inheritance (my_cover_struct below).
  2. Extend its relevant methods (in our example we extend only the end_group() method) and access the relevant members (you can read about the other available methods and members in cdnshelp).
  3. Create an instance of the user_cover_struct child and call the predefined scan_cover() method whenever you want to query the data (even in every cycle). Calling this method will result in calling the methods you extended in step 2.

The code example below demonstrates these three steps. We chose to extend the end_group() method, and we keep the group grade in a local variable. Note that we divide it by 100,000,000 to get a number between 0 and 1, since the grade in this API is an integer from 0 to 100,000,000.

 struct my_cover_struct like user_cover_struct {
      !cur_group_grade:real;
   
      //Here we extend user_cover_struct methods
      end_group() is also {
      cur_group_grade = group_grade/100000000;        
      }
};
 
extend sys{
      !cover_info : my_cover_struct;
       run() is also {
          start monitor_cover ();
     };
     
     monitor_cover() @any is {
         cover_info = new;
         
         while(TRUE) {
             // wait some delay, for example –
             wait [10000] * cycles;
          
            // scan the packet.packet_cover cover group
            compute cover_info.scan_cover("packet.packet_cover");
          };//while
      };// monitor_cover
};//sys

Pass the Data to a Python Module
After we have extracted the group grade, we need to pass the grade, along with the cycle and the coverage group name (assuming there are several groups), to a Python module. We will look at the Python module itself later; for now, let's look at how to pass the information from the e code to Python. Note that in addition to passing the grade at certain points (the addVal method), we need an initialization method (init_plot) that takes the number of cycles, so that the X axis can be drawn at the beginning, and an end_plot() method to mark interesting points on the plot at the end. But to begin with, let’s have empty methods on the Python side and make sure we can just call them from the e code.

 # plot_i.py
def init_plot(numCycles):
    print (numCycles)
def addVal(groupName,cycle,grade):
    print (groupName,cycle,grade)
def end_plot():
    print ("end_plot") 

And add the calls from e code:

struct my_cover_struct like user_cover_struct {
     @import_python(module_name="plot_i", python_name="addVal")
     addVal(groupName:string, cycle:int,grade:real) is imported;
  
     !cur_group_grade:real;
  
     //Here we extend user_cover_struct methods
     end_group() is also {
         cur_group_grade = group_grade/100000000;
         
        //Pass the values to the Python module
         addVal(group_name,sys.time, cur_group_grade);      
     }   //end_group
};//user_cover_struct
 
extend sys{
     @import_python(module_name="plot_i", python_name="init_plot"
     init_plot(numCycles:int) is imported;
    
     @import_python(module_name="plot_i", python_name="end_plot")
     end_plot() is imported;
    
     !cover_info : my_cover_struct;
     run() is also {
         start scenario();
    };
    
    scenario() @any is {
         //initialize the plot in python
         init_plot(numCycles);
        
         while(sys.time<numCycles)
        {
             //Here you add your logic     
             
            //get the current coverage information for packet
            cover_info = new;
            var num_items:=  cover_info.scan_cover("packet.packet_cover");
           
            //Here you add your logic       
        
        };//while
        
        //Finish the plot in python
        end_plot();
   
    }//scenario
}//sys
 
  • The "is imported" method declarations (addVal(), init_plot(), end_plot()) define the methods as they are called from the e code.
  • The @import_python annotations are predefined annotations stating that the method declared on the following line is imported from Python; they name the Python module and the method in it.
  • The calls to addVal(), init_plot(), and end_plot() in the e code bodies are the actual calls to the Python methods.

 Before running this, note that you need to ensure that Specman finds the Python include and lib directories, and Python finds our Python module. To do this, you need to define a few environment variables: SPECMAN_PYTHON_INCLUDE_DIR, SPECMAN_PYTHON_LIB_DIR, and PYTHONPATH. 
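
For example, something along these lines in the shell before starting Specman (csh-style; the paths are placeholders for your own Python installation and for the directory holding plot_i.py):

# adjust the paths to your installation
setenv SPECMAN_PYTHON_INCLUDE_DIR /path/to/python/include/python3.x
setenv SPECMAN_PYTHON_LIB_DIR     /path/to/python/lib
# directory that contains plot_i.py
setenv PYTHONPATH                 /path/to/dir/containing/plot_i:${PYTHONPATH}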

 The Python Module to Draw the Plot
After we have extracted the coverage information and ensured that we can pass it to a Python module, we need to display this data in the Python module. There are many code examples out there for drawing a graph with Python, especially with matplotlib. You can either accumulate the data and draw a graph at the end of the run, or draw the graph interactively during the run itself, which is very useful, especially for long runs.

Below is code that draws the coverage grade of multiple groups interactively during the run; at the end of the run it draws circles around the first maximum point of each group and adds some text next to it. I am new to Python, so there might be better or simpler ways to do this, but it does the job. The cool thing is that there are so many examples to rely on that you can produce this kind of code very quickly.

# plot_i.py
import matplotlib
import matplotlib.pyplot as plt
plt.style.use('bmh')
#set interactive mode
plt.ion()
fig = plt.figure(1)
ax = fig.add_subplot(111)
# Holds a specific cover group
class CGroup:
    def __init__(self, name, cycle,grade ):
        self.name = name
        self.XCycles=[]
        self.XCycles.append(cycle)
        self.YGrades=[]
        self.YGrades.append(grade)  
        self.line_Object= ax.plot(self.XCycles, self.YGrades,label=name)[-1]             
        self.firstMaxCycle=cycle
        self.firstMaxGrade=grade
    def add(self,cycle,grade):
        self.XCycles.append(cycle)
        self.YGrades.append(grade)
        if grade>self.firstMaxGrade:
            self.firstMaxGrade=grade
            self.firstMaxCycle=cycle          
        self.line_Object.set_xdata(self.XCycles)
        self.line_Object.set_ydata(self.YGrades)
        plt.legend(shadow=True)
        fig.canvas.draw()
     
#Holds all the data of all cover groups   
class CData:
    groupsList=[]
    def add (self,groupName,cycle,grade):
        found=0
        for group in self.groupsList:
            if groupName in group.name:
                group.add(cycle,grade)
                found=1
                break
        if found==0:
            obj=CGroup(groupName,cycle,grade)
            self.groupsList.append(obj)
     
    def drawFirstMaxGrade(self):
        for group in self.groupsList:
            left, right = plt.xlim()
            x=group.firstMaxCycle
            y=group.firstMaxGrade
           
            #draw arrow
            #ax.annotate("first maximum grade", xy=(x,y),
            #xytext=(right-50, 0.4),arrowprops=dict(facecolor='blue', shrink=0.05),)
           
            #mark the points on the plot
            plt.scatter(group.firstMaxCycle, group.firstMaxGrade,color=group.line_Object.get_color())
          
            #Add text next to the point   
            text='cycle:'+str(x)+' grade:'+str(y)   
            plt.text(x+3, y-0.1, text, fontsize=9,  bbox=dict(boxstyle='round4',color=group.line_Object.get_color()))                                                                      
       
#Global data
myData=CData()
 
#Initialize the plot, should be called once
def init_plot(numCycles):
    plt.xlabel('cycles')
    plt.ylabel('grade')   
    plt.title('Grade over time')  
    plt.ylim(0,1)
    plt.xlim(0,numCycles)
 
#Add values to the plot
def addVal(groupName,cycle,grade):
    myData.add(groupName,cycle,grade)
#Mark interesting points on the plot and keep it shown
def end_plot():
    plt.ioff();
    myData.drawFirstMaxGrade(); 
   
    #Make sure the plot is being shown
    plt.show();
#uncomment the following lines to run this script with simple example to make sure #it runs properly regardless of the Specman interaction
#init_plot(300)
#addVal("xx",1,0)
#addVal("yy",1,0)
#addVal("xx",50,0.3)
#addVal("yy",60,0.4)
#addVal("xx",100,0.8)
#addVal("xx",120,0.8)
#addVal("xx",180,0.8)
#addVal("yy",200,0.9)
#addVal("yy",210,0.9)
#addVal("yy",290,0.9)
#end_plot()
 

In the example we used, we had two interesting entities, packet and state_machine, and thus two corresponding coverage groups. When running our example connected to the Python module, we get the following graph, which is displayed interactively during the run.

[Figure: coverage grades of the packet and state_machine groups plotted against cycles]

Analyzing this specific example, we can see two things. First, packet reaches high coverage quite fast, and a significant part of the run does not contribute to its coverage. On the other hand, something interesting happens to state_machine around cycle 700 that suddenly boosts its coverage. The next step would be to dump graphic information for other entities and see whether something noticeable happens around cycle 700.

To run a complete example, you can download the files from: https://github.com/okirsh/Specman-Python/

Do you feel like analyzing the coverage behavior in your environment? We will be happy to hear about your outcomes and other usages of the Python interface.

Orit Kirshenberg
Specman team





A Specman/e Syntax for Sublime Text 3

We're happy to have guest blogger Thorsten Dworzak, Principal Consultant at Verilab GmbH, describe how he added Specman/e syntax to Sublime Text 3:

According to the 2018 StackOverflow Developer Survey, the popularity of development environments (IDEs, Text Editors) among software developers shows the following ranking:

  1. Visual Studio Code 34.9%
  2. Visual Studio 34.3%
  3. Notepad++ 34.2%
  4. Sublime Text 28.9%
  5. Vim 25.8%
  6. IntelliJ 24.9%
  7. Android Studio 19.3%
  8. (DVT) Eclipse 18.9%
  9. Emacs 4.1%

Of these, only Vim, (DVT) Eclipse, and Emacs support editing in e-language (at least, last time I checked). Kate, which comes with KDE and also has a Specman mode, is not on this list.

I started using Sublime Text 3 some time ago. It offers packages that support a number of programming languages.

Though there is an e-language syntax available from Tsvi Mostovicz, it is unfinished work, and many syntactic constructs are missing. So, I created a fork of his project and finished it (it will eventually be merged back into his project).

It is a never-ending task because my code base for testing is limited and e is still undergoing development. The project is available through ST3's Package Control and you can contribute to it via Github.

I am eagerly waiting for your pull requests and/or comments and contributions!





RAK Attack: Better Driver Tracing, Faster Palladium Build Time, UVM Register Map Automation

Looking to learn? There's a bunch of new RAKs (Rapid Adoption Kits) available online now!

1) Indago 19.09 Better Driver Tracing and More

Are you new to Indago and not sure where to start? Luckily, there’s a new Rapid Adoption Kit for you: the Indago 19.09 Overview RAK! This neat package contains everything you need to get your debugging started through Indago. In four short labs, plus a brief introductory lab, you’ll have all the basics of Indago 19.09 down—the Indago working environment, the SmartLog, how Indago interacts with the rest of the Cadence Verification Suite, and how Indago uses HDL driver tracing.

Lab 1 discusses the various debugging tools included in Indago and teaches you how to customize your Indago windows and environment settings. Lab 2 covers the SmartLog feature and talks about analyzing and filtering its messages to suit your needs, as well as how to interact with the waveform marker. Lab 3 is an interactive Indago debugging experience—it’ll walk you through how to use Indago and its features in an actual working environment: setting breakpoints, using simulator commands in the Indago console, toolbars, switches, and more. Lab 4 is all things HDL tracing—recording debug data, an introduction to debug assertions, waveform visualizations, driving expression analysis, and single-step driver tracing, among other things.

Interested? Check out the RAK here.

2) IXCOM MSIE: Faster Palladium Build Time

Got several testbenches you want to compile with the same DUT and tests and you want to do it fast? With IXCOM, all you have to do to compile those different testbenches is use the xrun command for each after compiling your DUT. But what exactly is IXCOM, and how does one start using it? This quick RAK can help—here, you’ll learn the basics of using MSIE features with IXCOM, complete with an example to get you started. Using MSIE can vastly improve your build times with Palladium and using IXCOM is the best way to shrink that tedious rebuild time as small as it can get. Check out this RAK here.

3)  JasperGold Control and Status Register Verification App Automates UVM Register Map Verification

New to the JasperGold Control and Status Register (CSR) Verification App for your UVM testbenches? Don’t worry; there’s a RAK for that! This eponymous RAK can get you up and running with this in no time, helping you automate your checks from UVM register map specs. With this RAK, you’ll learn the basics of the JasperGold CSR, how to use JasperGold CSR’s Proof Accelerator, and more. CSR features a model-based approach to predicting a register’s expected value, supports pipeline interfaces, all IP-XACT access policies, and it can fully model any expected register value. It also supports register aliases, read and write semantics, and separate read/write data latencies in any given field.

If this functionality sounds up your alley, you can take a look at this RAK here.





Specman’s Callback Coverage API

Our customers’ tests have become more complex, longer, and consume more resources than before. This increases the need to optimize the regression while not compromising on coverage.

Some advanced customers of Specman use Machine Learning-based solutions to optimize their regressions, while others use simpler solutions. Based on a request from an advanced customer, we added a new Coverage API in Specman 19.09 called Coverage Callback. In 20.03, we further enhanced this API by adding more options. Now there are two Coverage APIs that provide coverage information during the run (the old scan_cover API and the new Callback API). This blog presents these two APIs and compares them, focusing on the newer one.

Before we get into the specifics of each API, let’s discuss what is common between these APIs and why we need them. Typically, people observe the coverage model after the test ends, and get to know the overall contribution of the test to the coverage. With these two APIs, you can observe the coverage model during the test, and hence, get more insight into the test progress.

Are you wondering about what you can do with this information? Let’s look at some examples.

  1. Recognize cases when the test continues to run long after it already reached its coverage goal.
  2. View the performance of the coverage curve. If a test is “stuck” at the same grade for a long time, this might indicate that the test is not very good and is just a waste of resources.

These analyses can be performed in the test itself, and the test can then decide either to stop the run or to change something in its configuration; they can also be performed post-run. You can also present the data visually for analysis, as shown in the blog Analyze Your Coverage with Python.

scan_cover API (or “Scanning the Coverage Model”)

With this API, you can get the current status of any cover group or item you are interested in at any point in time during the test (by calling scan_cover()). It is very simple to use; however, it has a performance penalty. To get the coverage grade of any cover group during the test, you should:
1. Trigger scan_cover() whenever you want the coverage model to be scanned.
2. Implement the scan_cover-related methods, such as start_item() and end_bucket(). In these methods, you can query the current grade of a group/item/bucket.
The blog mentioned earlier, Analyze Your Coverage with Python, describes the details and provides an example.

Callback API

The Callback API enables you to get a callback for the desired cover group(s) whenever they are sampled. This API also provides various query methods for getting coverage-related information, such as the current sampled value. So, in essence, it is similar to the scan_cover API, but as the phrase goes: “same same but different”:

  1. Callback API has almost no performance penalty while scan_cover API does.
  2. Callback API contains a richer set of query methods that provide a lot of information about the current sampled value (vs just the grade with scan_cover).
  3. Using scan_cover API, you decide when you want to query the coverage information (you call scan_cover), while with the Callback API you query the coverage information when the coverage is sampled (from do_callback). So, scan_cover gives you more flexibility, but you do need to find the right timing for this call.

Neither API has an absolute advantage over the other; it depends on what you want to do.

Callback API details

The Callback API is based on a predefined struct called: cover_sampling_callback. In order to use this API, you need to:

  1. Define a struct inheriting cover_sampling_callback (cover_cb_save_data below)
    1. Extend the predefined do_callback() method. This method is a hook that is called whenever any of the cover groups registered to the cover_sampling_callback instance is sampled.
    2. From do_callback() you can access coverage data by using queries such as: is_currently_per_type(), get_current_group_grade() and get_current_cover_group() (as in the example below) and many more such as: get_relevant_group_layers() and get_simple_cross_sampled_bucket_name().
  2. Register the desired cover group(s) to this struct instance using the register() method.

Take a look at the following code:

// Define a coverage callback.
// Its behavior – print to screen the current grade.
struct cover_cb_save_data like cover_sampling_callback {
    do_callback() is only {
       // In this example, we care only about the per_type grade, and not per_instance
       if is_currently_per_type() {           
            var cur_grade : real = get_current_group_grade();
            sys.save_data (get_current_cover_group().get_name(), cur_grade);
        };//if
    };//do_callback()
};// cover_cb_save_data


extend sys {
    !cb : cover_cb_save_data;

   // Instantiate the coverage callback, and register to it two of my coverage groups
    run() is also {
       cb  = new  with {
        var gr1:=rf_manager.get_struct_by_name("packet").get_cover_group("packet_cover");
        .register(gr1);
        var gr2:=rf_manager.get_struct_by_name("sys").get_cover_group("mem_cover");
       .register(gr2);
       };//new  
    };//run()

  save_data(group_name : string, group_grade : real) is {
        //here you either print the values to the screen, update a graph you show or save to a db 
  };// save_data
};//sys

In the blog Analyze Your Coverage with Python mentioned above, we show an example of how you can use the scan_cover API to extract coverage information during the run and then use the Specman-Python API to display the coverage interactively (using the Python plotting library matplotlib). If you find this usage interesting and want to use the same example with the Callback API instead of the scan_cover API, you can download the full example from Git here: https://github.com/efratcdn/cover_callback.

Specman Team

 

 





BoardSurfers: Allegro In-Design IR Drop Analysis: Essential for Optimal Power Delivery Design

All PCB designers know the importance of proper power delivery for successful board design. Integrated circuits need the power to turn on, and ICs with marginal power delivery will not operate reliably. Since power planes can...(read more)





BoardSurfers: Training Insights: Loading SKILL Programs Automatically

Imagine you are on a vacation with your family, and suddenly, your phone starts buzzing. You pick it up and what are you looking at is a bunch of pending, unanswered e-mails. You start recollecting the checklist you had made before taking off only to realize that you haven’t put on the automatic replies! (read more)





BoardSurfers: Training Insights: Placing Parts Manually Using Design for Assembly (DFA) Rules

So, what if you can figure out all that can go wrong when your product is being assembled early on? Not guess but know and correct at an early stage – not wait for the fabricator or manufacturer to send you a long report of what needs to change. That’s why Design for Assembly (DFA) rules(read more)



  • Allegro PCB Editor


Ultra Low Power Benchmarking: Is Apples-to-Apples Feasible?

I noticed some very interesting news last week, widely reported in the technical press, and you can find the source press release here. In a nutshell, the Embedded Microprocessor Benchmark Consortium (EEMBC) has formed a group to look at benchmarks for ultra low power microcontrollers. Initially chaired by Horst Diewald, chief architect of MSP430TM microcontrollers at Texas Instruments, the group's line-up is an impressive "who's who" of the microcontroller space, including Analog Devices, ARM, Atmel, Cypress, Energy Micro, Freescale, Fujitsu, Microchip, Renesas, Silicon Labs, STMicro, and TI.

As the press release explains, unlike usual processor benchmark suites which focus on performance, the ULP benchmark will focus on measuring the energy consumed by microcontrollers running various computational workloads over an extended time period. The benchmarking methodology will allow the microcontrollers to enter into their idle or sleep modes during the majority of time when they are not executing code, thereby simulating a real-world environment where products must support battery life measured in months, years, and even decades.

Processor performance benchmarks seem to be as widely criticized as EPA fuel consumption figures for cars - and the criticism is somewhat related. There is a suspicion that manufacturers can tune the performance for better test results, rather than better real-world performance. On the face of it, the task to produce meaningful ultra low power benchmarks seems even more fraught with difficulties. For a start, there is a vast range of possible energy profiles - different ways that computing is spread over time - and a plethora of low power design techniques available to optimize the system for the set of profiles that particular embedded system is likely to experience. Furthermore, you could argue that, compared with performance in a computer system, energy consumption in an ultra low power embedded system has less to do with the controller itself and more to do with other parts of the system like the memories and mixed-signal real-world interfaces.

EEMBC cites that common methods to gauge energy efficiency are lacking in growth applications such as portable medical devices, security systems, building automation, smart metering, and also applications using energy harvesting devices. At Cadence, we are seeing huge growth in these areas which, along with intelligence being introduced into all kinds of previously "dumb" appliances, is becoming known as the "Internet of Things." Despite the difficulties, with which the parties involved are all deeply familiar, I applaud this initiative. While it may be difficult to get to apples-to-apples comparisons for energy consumption in these applications, most of the time today we don't even know where the grocery store is. If the EEMBC effort at least gets us to the produce department, we're going to be better off.

Pete Hardee 

 





cadence ADE EXPLORER vs MAESTRO

Hello, I saw that Maestro is a plotting add-on. Is it part of ADE Explorer?

I can't see the relation between the two. I started reading the manual, and regarding Maestro I only see code.

Are there any simple examples?
Thanks.





How to install PLL Macro Model Wizard?

Hello,

I am using Virtuoso version IC 6.1.7-64b.500.1, and I am trying to follow the Spectre RF Workshop "Noise-Aware PLL Design Flow (MMSIM 7.1.1)" PDF.

I could find the workshop library "pllMMLib", but I cannot find the PLL Macro Model Wizard; I have attached a screenshot of my screen.

Could you please help me install the module "PLL Macro Model Wizard"?

Thanks a lot!





matching network problem in cadence virtuoso

Hello, I have built a matching network for 13 dB gain and NF, as shown below step by step (including all the plots and MATLAB code).

It's just not working at all; I am doing it exactly by the theory:

Taking a point inside the circle -> converting its gamma to Z_source -> converting gamma_s into gamma_L with the formula below, as shown in the MATLAB code -> converting gamma_L into Z_L -> building the matching network for the conjugate of Z_L and Z_c. It's just not working.

Where did I go wrong?

Thanks.

% source reflection coefficient for the chosen point, 50-ohm reference
gamma_s = 75.8966*exp(deg2rad(280.88)*i);
z_s = gamma2z(gamma_s,50);            % corresponding source impedance

% device S-parameters
s11 = 0.99875-0.03202*i;
s12 = 721.33*10^(-6)+8.622*10^(-3)*i;
s21 = -188.37*10^(-3)+30.611*10^(-3)*i;
s22 = 875.51*10^(-3)-100.72*10^(-3)*i;

% load reflection coefficient for an output conjugate match, then its impedance
gamma_L = conj((s22+(s12*s21*gamma_s)/(1-s11*gamma_s)))
z_L = gamma2z(gamma_L,50)





commands that was performed by GUI

Hello there, I'm a student learning Allegro PCB Designer.

There are some operations I can do with the GUI, but I want to know which commands they correspond to, so that I can route with commands only (e.g., SKILL).

Is there any file where I can see which commands I used, something like a log file or command history?

Thank you for reading this long, boring question.





E- (SPMHDB-187): SHAPE boundary may not cross itself.

Hi experts,

I have a problem with my design as below

ERROR: in SHAPE (-2.3622 2.3622)

  class = ETCH
  subclass = TOP 
  Part of Symbol Def SHAPE_4725X4725.
      Which is part of a padstack as a SHAPE symbol.
  ERROR(SPMHDB-187): SHAPE boundary may not cross itself.
   Error cannot be fixed.
       Object has first point location at (-2.3622 2.3622).

Can you tell me how to solve my problem?

Thanks a lot.





VManager wrongly imports failed test as passed

Hello,
I'm exploring VManager tool capabilities.

I launched a simulation with xrun, which terminates with a fatal error (`uvm_fatal actually).

Then I imported the flow session, through VManager -> Regression -> Collect Runs, linking the directory with ucm and ucd of just failed run.

VManager imports the test with following attributes:

Total Runs =1

#Passed =1

#Failed =0

What am I missing here? It should be imported as a failed test.

If I right-click on the flow name and choose Analyze All Runs, VManager brings me to the Analysis tab, and I can see only a PASSED tag in the Runs subwindow.

Thank you for any help





Simvision Schematic Information

Hi all,

I would like to understand whether it is possible, from SimVision, to get information about the view of a block. In principle, using the Schematic Tracer, SimVision is able to find the information about the config of that particular model, but I did not find a command for describing the nature of the module (for example, whether it is a schematic, RTL, or real model...).

Are there any functions I can use for this purpose?

Many thanks





Running xrun command in vsif file

Hi,

I found a basic Specman E/Verilog program at http://www.asic-world.com/examples/specman/memory.html and I would like to run it through a vsif file, with vManager.

I'm able to run it without problems with this command: xrun -Q -unbuffered '-timescale' '1ns/1ns' '-access' '+rw' memory_tb.v mem_tb_top.e test_write_read_all.e.

I wrote a first vsif, which looks like this:

---- vm_basic.vsif -----

session vm_basic {
        top_dir : /home/cadence/xrunTest/;
        output_mode: terminal;
};

group basic {
        test test {
                run_script: xrun -Q -unbuffered '-timescale' '1ns/1ns' '-access' '+rw' memory_tb.v mem_tb_top.e test_write_read_all.e
        };
};

----------------------------

This solution didn't work due to the prompt change with xrun, and I have no clue how to manage this issue.

Have you any idea?

Best regards,

Yohan





How to remove sessions from vManager without deleting them

I am importing sessions which were run by other people in order to analyse them, and I would like to remove them from my vManager Regressions tab as they become obsolete. As I am not the original person who ran the sims, I cannot "delete" the sessions. What are my options? Thanks.





search for glob/regexp in specman loaded modules?

Specman *search* command allows searching in all loaded modules, but only for a string.

Is there a way to search for a regexp or glob?

Alternatively, is there a way to simply get a list of all loaded files somehow? Then I could use either the "shell" command, or real shell together with grep.

Thanks





How to get product to license feature mapping information?

When I run a simulation with irun, it may use many license features. How can I know which feature(s) a product uses? I found the following in cdnshelp:

-------------------------------------------------------------

Which Products Are in the License File?


One Cadence product can require more than one license (FEATURE). The product to feature mapping in the license file lists the licenses each product needs.


For example, if the license file lists these features for the NC-VHDL Simulator:


Product Name: Cadence(R) NC-VHDL Simulator
#
Type: Floating Exp Date: 31-jul-2006 Qty: 1
#
Feature: NC_VHDL_Simulator [Version: 9999.999]
#
Feature: Affirma_sim_analysis_env [Version: 9999.999]

-------------------------------------------------------------------

But in my license file, I can't find such info; there are only "FEATURE" lines. How can I get the product-to-feature mapping info?

Thanks!





IC Packagers: Design Element Label Management

  A few weeks ago, we talked about template text labels for design-specific information. There, we were focused on labels that are specific to the design as a whole: revision information, dates, authors, etc. Today, we’re looking at a diff...(read more)



  • Allegro Package Designer
  • Allegro PCB Editor


IC Packagers: A New Option in Bond Finger Solder Mask Openings

If you design wire bond packages, you’re familiar with the need for the bond fingers and rings on the package substrate layers to be exposed through the solder mask layer. If they aren’t, it becomes… rather difficult… to bon...(read more)



  • Allegro Package Designer


S2P file format

How do I generate the S2P file format from the PowerSI tool?





allegro schematic Hierarchy

Hello, I need to ask a question regarding hierarchy in an Allegro schematic. I have created a hierarchical design, but now I need to make changes to one section so that they are not reflected in the other. How can I do that?





Capture Constraint Manager

Is anyone else using Constraint Manager within Capture? This is my first time using it. I'm finding that it is occasionally changing some of my constraint values in Allegro. It seems random. 





Placement by Schematic Page Problem (Not Displaying All Page)

I am using PCB Editor v17.2-2016.

I tried to do placement by schematic page but not all pages are displayed.

Earlier, I successfully did the placement by schematic pages, and it was showing all the pages. But then I decided to delete all the placed components and do the placement again.

When I try to do placement by schematic page again, I notice that only the pages on which I had previously completed all the placement are missing.





Error: CMFBC-1 The schematic and the layout constraints were not synchronized

Hi, I am in the middle of a design and had no problem going back and forth between schematics and layout. Now I am getting the error message below. I am using Cadence 17.2.

ERROR: Layout database has probably been reverted to an earlier version than that, which was used in the latest flow or the schematic database was synchronized with another board.

The basecopy file generated by the last back-to-front flow not found.

ERROR: Layout database has probably been reverted to an earlier version than that, which was used in the latest flow or the schematic database was synchronized with another board.

The basecopy file generated by the last back-to-front flow not found.

Error: CMFBC-1: The schematic and the layout constraints were not synchronized as the changes done since the last sync up could not be reconciled. Syncing the current version of the schematic or layout databases with a previous version would result in this issue. The  constraint difference report is displayed.

Continuing with "changes-only" processing may result in incorrect constraint updates.

Thanks for your input

Claudia





Soldermask and Pastemask Layers

Hi All,

I'm just about to finish my first PCB layout, and I want to understand two issues better:

1. Soldermask layer: for an SMT pad (say, in the Pad Designer tool), when exactly do we want to define a soldermask top opening, and when a soldermask bottom opening?

If it's a TH pad, I guess we always want to define soldermask on both layers (top and bottom), because the pad crosses all the layers. However, if it's an SMT pad, which soldermask do we want?

2. Pastemask layer: is this layer necessary for Gerber file generation when we have SMT components in our circuit?

And again, when do we define pastemask top for TH/SMT pads, and when pastemask bottom?

Thanks!





ce_tools directory no longer shipped with Specman

Hello All,

Starting with version 8.1, the contents of the ce_tools directory will no longer be shipped with Specman. The directory contains some unsupported AE/R&D-ware and has not been updated for several releases (i.e., most of those old packages don't work with the latest release).

Attached are the contents of this directory. Please read the README before using any of the packages.


Regards,
-hannes


Originally posted in cdnusers.org by hannes





Specman Makefile generator utility

I've heard lots of people asking for a way to generate Makefiles for Specman code, and it seems there are some who don't use "irun" which takes care of this automatically. So I wrote this little utility to build a basic Makefile based on the compiled and loaded e code.

It's really easy to use: at any time load the snmakedeps.e into Specman, and use "write makefile <name> [-ignore_test]".
This will dump a Makefile with a set of variables corresponding to the loaded packages, and targets to build any compiled modules.
Using -ignore_test will avoid having the test file in the Makefile, in case you switch tests often (who doesn't?).
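
For example, from the Specman command prompt (the Makefile name here is just an illustration):

load snmakedeps.e
write makefile Makefile.specman -ignore_test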

It also writes a stub target so you can do "make stub_ncvlog" or "make stub vhdl" etc.

The targets are pretty basic; I thought it was more useful to include this from the main Makefile and define your own more complex targets/dependencies as required.

The package uses the "reflection" facility of the e language, which has been documented since Specman 8.1, so you can extend this utility if you want (please share any enhancements you make).

 Enjoy! :-)

Steve.





e-code: Macro example code for Team Specman blog post

Hi everybody,

 

The attached package is a tiny code example with a demo for an upcoming Team Specman blog post about writing macros.

 

Hilmar