era

China’s Military Is Tied to Debilitating New Cyberattack Tool

An Israeli security company said the hacking software, called Aria-body, had been deployed against governments and state-owned companies in Australia and Southeast Asia.




era

Lessons to be learned from cholera | letters

Brian Waller questions the lack of political will when it comes to preventable deaths across Asia and sub-Saharan Africa, while Tony Haynes reveals how artists can explore attitudes to disease

Neil Singh’s powerful long read (Cholera and coronavirus: why we must not repeat the same mistakes, 1 May) tellingly compares the way in which the world is reacting to Covid-19 with how it has handled cholera, especially in developing countries. He states: “There is no biological or environmental reason why cholera can’t be eradicated … It is not the knowhow that is lacking, but rather the political will.”

Exactly the same conclusion can be reached in respect of the 5 million-plus children under five who are dying every year. According to the World Health Organization, many of these early child deaths are preventable or easily treatable, yet nothing remotely like the effort seen in the response to Covid-19 is being put into them. Might the reason for that inaction be that more than 80% of these deaths involve children in central and south Asia, and sub-Saharan Africa?
Brian Waller
Otley, North Yorkshire

Continue reading...




era

The coronavirus strikes prisons, and hundreds of thousands of inmates are released

The virus has spread rapidly through overcrowded prisons around the world, leading governments to release prisoners en masse.




era

Players seek federal inquiry of Georgia shooting

The Players Coalition and dozens of professional athletes sent a letter to Attorney General William Barr and FBI Director Christopher Wray requesting an immediate federal investigation into the death of Ahmaud Arbery in Georgia.




era

The Desperate Passion of Ben Foster

I could barely recognize Ben Foster in 3:10 to Yuma, but I was blown away just the same, as I had been by his star-making turn in Hostage. What makes Foster so special in Yuma?

Yuma contains two of Hollywood’s finest: Russell Crowe and Christian Bale. Bale is excellent, Crowe a little too relaxed to be cock-sure-dangerous. Both are unable to provide the powder-keg relationship that the movie demands.

Into this void steps Ben Foster. He plays Charlie Prince, sidekick to Crowe’s dangerous and celebrated outlaw Ben Wade. When Wade is captured, Prince is infuriated. He initiates an effort suffused with desperate passion to rescue his boss.

Playing Prince with a mildly effeminate gait, Foster quickly becomes the movie’s beating heart. What struck me in particular was that Foster was able to balance method acting with just plain good acting. He plays his character organically but isn’t above drawing attention with controlled staginess.

Gradually, Foster’s willingness to control a scene blends into Prince’s. Is the character manipulating his circumstances in the movie, or is the actor playing a fine hand? Foster is so entertaining that the answer is immaterial.

Rave Out © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #102: Generalissimo 73

Sample clues

5 across: The US president’s bird (3,5,3)

11 down: Group once known as the Quarrymen (7)

10 across: Cavalry sword (5)

19 across: Masonic ritual (5,6)

1 down: Pioneer of Ostpolitik (6)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #103: Generalissimo 74

Sample clues

14 across: FDR’s baby (3,4)

1 down: A glitch in the Matrix? (4,2)

4 down: Slanted character (6)

5 down: New Year’s venue in New York (5,6)

16 down: Atmosphere of melancholy (5)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #104: Generalissimo 74

Sample clues

6 across: Alejandro González Iñárritu’s breakthrough film (6,6)

19 across: Soft leather shoe (8)

7 down: Randroids, for example (12)

12 down: First American World Chess Champion (7)

17 down: Circle of influence (5)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #105: Generalissimo 75

Sample clues

5 across: Robbie Robertson song about Richard Manuel (6,5)

2 down: F5 on a keyboard (7)

10 across: Lionel Richie hit (5)

3 down: ALTAIR, for example (5)

16 down: The problem with Florida 2000 (5)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #106: Generalissimo 76

Sample clues

9 across: Van Morrison classic from Moondance (7)

6 down: Order beginning with ‘A’ (12)

6 across: Fatal weakness (8,4)

19 across: Rolling Stones classic (12)

4 down: Massacre tool (8)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Cadence Genus Synthesis Solution – the Next Generation of RTL Synthesis

Physical synthesis has been around in various forms for many years. The basic idea is to bring some awareness of physical layout into synthesis. This week (June 3, 2015) Cadence is rolling out the Genus™ Synthesis Solution, a next-generation RTL synthesis tool that takes physical awareness in some new directions.

Here are four important things to know about Genus technology:

  • A massively parallel architecture improves turnaround time by up to 5X while maintaining quality of results
  • The Genus solution synthesizes up to 10M+ instances flat without impacting power, performance and area (PPA)
  • The Genus solution provides tight correlation with the Innovus Implementation System, using the same placement and routing algorithms
  • Globally focused PPA optimization saves up to 20% datapath area and power

Compared to previous-generation products such as the Cadence Encounter RTL Compiler Advanced Physical Option, the Genus solution approaches physical synthesis in a different way. The Encounter solution applied physical optimization “at the tail end of synthesis,” said David Stratman, senior principal product manager at Cadence. “We were doing a final incremental push, but we could only do so much, since we had locked in a lot of the earlier steps from a logical-only synthesis perspective.”

Genus Synthesis Solution supports the physical synthesis features in the previous Encounter solution, but it also brings the full physical scope upstream to RTL logic designers. “It’s going to enable the unit-level RTL designer to gain the benefits of physical synthesis without having to understand it,” Stratman said. As an example, users can apply generic (unmapped) placement at the earliest stages of synthesis, using a lightweight version of the Innovus placement engine. The bottom line: “Genus is a full solution where every step of synthesis can be done physically.”

Getting Massively Parallel

If you bring physical data into synthesis, you need a way to improve capacity and runtimes, especially with today’s gigantic advanced-node SoCs. That’s why a massively parallel architecture is the cornerstone of the Genus solution. In this way, the Genus solution is following in the footsteps of the Innovus Implementation System, which also provides a massively parallel architecture.

Both the Innovus and Genus solutions can handle blocks of 10M instances flat. Given that SoCs today may have up to 100M instances, and often up to 50-100 top-level blocks, this is an important capability. Many tools today will only handle blocks of 1M instances. As a result, design teams often have to constrain block sizes.

Genus technology offers timing-driven, multi-level design partitioning across multiple threads and machines. It enables a near-linear runtime scaling without impacting PPA. According to Stratman, the Genus solution will scale well beyond 64 CPUs for a large design, with a “sweet spot” around 8-20 CPUs for today’s typical block sizes. Runs that used to take days, he noted, can now be done in hours.

As shown below, Genus technology leverages parallelism at three levels. The Genus solution can distribute design partitions to multiple threads or CPUs, and also supports local algorithm-level multithreading on each machine with shared memory. An adaptive scheduler ensures the best use of the available CPUs.


Fig. 1 – Genus Synthesis Solution provides three levels of parallelism

With its massive parallelism, Stratman said, Genus technology can obtain production-level quality of results (QoR) in runtimes typically seen in “prototype-level” synthesis runs. The “secret sauce,” he said, is in the partitioning. Cadence has found a way to generate partitions in a way that “slices the design more intelligently, and takes advantage of the Genus database to merge partitions without losing timing, power, or area,” Stratman said.

Playing in the Sandbox

In the Genus Synthesis Solution, a process called “sandboxing” allows any subset or partition of a design to be extracted along with full timing and a physical context. Optimization algorithms will treat a sandbox as a complete design.

The “Clipper” flow clips out or extracts the context of the larger SoC blocks. “It’s kind of a skeleton floorplan but it has all the timing information,” Stratman said. These extracted contexts include all the critical physical information to make the right RTL synthesis choices at the unit level. This information is used to streamline the handoffs between unit-level RTL designers, integration engineers, and implementation engineers. It’s a way for logic designers to gain some physical knowledge without having to be a physical synthesis expert, or without having to run a full top-level synthesis.

Fig. 2 – Clipper flow provides context for unit-level blocks

Correlation with Innovus Implementation System

Although Genus technology can work with third-party IC implementation systems, it shares algorithms and engines with the Innovus Implementation System, as well as a common user interface. As shown below, both the Genus and Innovus solutions use table-based Quantus QRC parasitic extraction, effective current source model (ECSM) and composite current source (CCS) delay calculations, and a unified global routing engine. Timing and wire length are claimed to correlate within 5%.

Fig. 3 – Genus Synthesis Solution offers tight correlation with Innovus Implementation System

Genus technology doesn’t model everything to the same level of accuracy as the Innovus solution, however. “We chose to be lighter weight and more nimble to get expected runtimes,” Stratman said. A tight correlation is possible because the Genus and Innovus solutions use a similar code base. This correlation will be tighter than that between Encounter RTL Compiler Advanced Physical Option and the Encounter Digital Implementation System today.

Genus Synthesis Solution uses a new Hybrid Global Router that provides the ability to resolve congestion and construct layer-aware, timing-driven wire topologies. This accelerates analysis and debug, and reduces iterations. Users can avoid blockages and see a full Manhattan route as opposed to “flight lines.” Layer awareness is particularly important, given the large RC variations within the metal stack at advanced process nodes.

A version of the Innovus GigaPlace engine is available within the Genus solution. Here, users can do an RTL-level generic gate placement early in the synthesis flow (“generic gate” means there is no mapping into standard cell libraries, but there’s still an area estimate). This helps designers understand PPA tradeoffs earlier.

While users can go all the way to a design-rule “legal” placement with Genus Synthesis Solution, this isn’t generally recommended. “You can do a placement and use the same algorithms as GigaPlace and get a nice correlation without all the runtimes and additional steps of doing a fully legal placement,” Stratman said.

So where does Genus technology end and Innovus technology begin? That’s up to the user. You could use the Genus solution for logical synthesis and run all physical implementation in the Innovus system. If you run physical synthesis within the Genus solution, there’s more work earlier in the flow, but you get better insights into downstream problems and reduce iterations.

“Physical synthesis should be no more than 2X [runtime] of logic synthesis,” Stratman said. “All of the runtime that moves up should be shaved off of the place-and-route stages, because now you can do lightweight incremental optimization and incremental placement. The overall flow should be runtime neutral or better.”

Be Globally Aware

Finally, Genus Synthesis Solution offers a globally focused early PPA optimization across the whole datapath, delivering up to a 20% area reduction in the datapath. Stratman noted that this capability is a follow-on to an RCP feature called “globally focused mapping” that can determine the best cells to use in a library. What’s new with the Genus solution is that this concept has been applied at the arithmetic level.

For example, there are many ways to configure a multiplier – you may want to prioritize speed, power, or size. In the past, Stratman noted, synthesis tools have not been very good at globally optimizing the architecture selection for PPA optimization. “We can [now] find the most efficient global datapath implementation for a given region,” he said.

For further information about the Cadence Genus Synthesis Solution, including a datasheet and technical product brief, see this landing page.

Richard Goering

Related Blog Posts

Designer View – RTL Synthesis Success Strategies at 28nm and Below

Front-End Design Summit: The Future of RTL Synthesis and Design for Test

Physically-Aware Synthesis Helps Design a New Computer Architecture

 




era

DAC 2015 Accellera Panel: Why Standards are Needed for Internet of Things (IoT)

Design and verification standards are critical if we want to get a new generation of Internet of Things (IoT) devices into the market, according to panelists at an Accellera Systems Initiative breakfast at the Design Automation Conference (DAC 2015) June 9. However, IoT devices for different vertical markets pose very different challenges and requirements, making the standards picture extremely complicated.

The panel was titled “Design and Verification Standards in the Era of IoT.” It was moderated by industry editor John Blyler, CEO of JB Systems Media and Technology. Panelists were as follows, shown left to right in the photo below:

  • Lu Dai, director of engineering, Qualcomm
  • Wael William Diab, senior director for strategy marketing, industry development and standardization, Huawei
  • Chris Rowen, CTO, IP Group, Cadence Design Systems, Inc.

 

In opening remarks, Blyler recalled a conversation from the recent IEEE International Microwave Symposium in which a panelist pointed to the networking and application layers as the key problem areas for RF and wireless standardization. Similarly, in the IoT space, we need to look “higher up” at the systems level and consider both software and hardware development, Blyler said.

Rowen helped set some context for the discussion by noting three important points about IoT:

  • IoT is not a product segment. Vertical product segments such as automotive, medical devices, and home automation all have very different characteristics.
  • IoT “devices” are components within a hierarchy of systems that includes sensors, applications, user interface, gateway application (such as cell phone), and finally the cloud, where all data is aggregated.
  • A bifurcation is taking place in design. We are going from extreme scale SoCs to “extreme fit” SoCs that are specialized, low energy, and very low cost.

Here are some of the questions and answers that were addressed during the panel discussion.

Q: The claim was recently made that given the level of interaction between sensors and gateways, 50X more verification nodes would have to be checked for IoT. What standards need to be enhanced or changed to accomplish that?

Rowen: That’s a huge number of design dimensions, and the way you attack a problem of that scale is by modularization. You define areas that are protected and encapsulated by standards, and you prove that individual elements will be compliant with that interface. We will see that many interesting problems will be in the software layers.

Q: Why is standardization so important for IoT?

Dai: A company that is trying to make a lot of chips has to deal with a variety of standards. If you have to deal with hundreds of standards, it’s a big bottleneck for bringing your products to market. If you have good standardization within the development process of the IC, that helps time to market.

When I first joined Qualcomm a few years ago, there was no internal verification methodology. When we had a new hire, it took months to ramp up on our internal methodology to become effective. Then came UVM [Universal Verification Methodology], and as UVM became standard, we reduced our ramp-up time tremendously. We’ve seen good engineers ramp up within days.

Diab: When we start to look at standards, we have to do a better job of understanding how they’re all going to play with each other. I don’t think one set of standards can solve the IoT problem. Some standards can grow vertically in markets like industrial, and other standards are getting more horizontal. Security is very important and is probably one thing that goes horizontally.

Requirements for verticals may be different, but processing capability, latency, bandwidth, and messaging capability are common [horizontal] concerns. I think a lot of standards organizations this year will work on horizontal slices [of IoT].

Q: IoT interoperability is important. Any suggestions for getting that done and moving forward?

Rowen: The interoperability problem is that many of these [IoT] devices are wireless. Wireless is interesting because it is really hard – it’s not like a USB plug. Wireless lacks the infrastructure that exists today around wired standards. If we do things in a heavily wireless way, there will be major barriers to overcome.

Dai: There are different standards for 4G LTE technology for different [geographical] markets. We have to make a chip that can work for 20 or 30 wireless technologies, and the cost for that is tremendous. The U.S., Europe, and China all have different tweaks. A good standard that works across the globe would reduce the cost a lot.

Q: If we’re talking about the need to define requirements, a good example to look at is power. Certainly you have UPF [Unified Power Format] for the chip, board, and module.

Rowen: There is certainly a big role for standards about power management. But there is also a domain in which we’re woefully under-equipped, and that is the ability to accurately model the different power usage scenarios at the applications level. Too often power devolves into something that runs over thousands of cycles to confirm that you can switch between power management levels successfully. That’s important, but it tells you very little about how much power your system is going to dissipate.

Dai: There are products that claim to be UPF compliant, but my biggest problem with my most recent chip was still with UPF. These tools are not necessarily 100% UPF compliant.

One other concern I have is that I cannot take Verilog code that passes on one simulator and count on it passing on another. Even though we have a lot of tools, there is no certification process for a language standard.

Q: When we create a standard, does there need to be a companion compliance test?

Rowen: I think compliance is important. Compliance is being able to prove that you followed what you said you would follow. It also plays into functional safety requirements, where you need to prove you adhered to the flow.

Dai: When we [Qualcomm] sell our 4G chips, we have to go through a lot of certifications. It’s often a differentiating factor.

Q: For IoT you need power management and verification that includes analog. Comments?

Rowen: Small, cheap sensor nodes tend to be very analog-rich, lower scale in terms of digital content, and have lots of software. Part of understanding what’s different about standardization is built on understanding what’s different about the design process, and what does it mean to have a software-rich and analog-rich world.

Dai: Analog is important in this era of IoT. Analog needs to come into the standards community.

Richard Goering

Cadence Blog Posts About DAC 2015

Gary Smith at DAC 2015: How EDA Can Expand Into New Directions

DAC 2015: Google Smart Contact Lens Project Stretches Limits of IC Design

DAC 2015: Lip-Bu Tan, Cadence CEO, Sees Profound Changes in Semiconductors and EDA

DAC 2015: “Level of Compute in Vision Processing Extraordinary” – Chris Rowen

DAC 2015: Can We Build a Virtual Silicon Valley?

DAC 2015: Cadence Vision-Design Presentation Wins Best Paper Honors

 

 

 




era

About using Liberate to create .lib for a cell with two separate outputs.

Hello, my name is Hsukang. I want to use Liberate to create a .lib file for the following circuit. It is a scan FF with two separate outputs. The problem is that no matter how I describe its function, the synthesis tool says it is a malformed scan FF. Has anyone ever encountered anything like this? How should I describe the function correctly? I found that almost all standard flip-flop cells have only one output, Q, or have Qn alongside it. Does Liberate support scan flip-flop cells with two separate outputs?

Thanks.





era

Verilog Code to Custom IC Layout generation

Hello everyone,

I am Vinay, and I am currently developing some digital circuits for a chip design as part of my master's thesis at the University at Buffalo.

I am fairly new to Verilog, and I don't seem to follow some of the things others find very easy.

Here are the things I want to do, for which I have no clue how to proceed:

1. Develop certain arithmetic functionality in Verilog

2. Generate a netlist from the Verilog code

3. Feed the netlist file to Cadence Encounter to generate the digital circuits' layout for my chip

I can use Cadence Virtuoso and Encounter for this but I don't know the exact procedure to get this done.

Could someone please describe the detailed process for doing the things mentioned above?

Thank you.




era

Interaction between Innovus and Virtuoso through OA database

Hello,

I created a floorplan view in Virtuoso (it contains pins and blockages). I am trying to run PnR in Innovus on the floorplan created in Virtuoso. I used set vars(oa_fp) "Library_name cell_name view_name" to read the view from Virtuoso. I can see the pins in Innovus, but not the blockages. How do I get the blockages created in Virtuoso into Innovus?

Regards,
Amuu 




era

Viewing RTL Code Coverage reports with XCELIUM

Hi,

There was a tool available with INCISIV called imc for viewing coverage reports.

The question is: how can we view the code coverage reports generated with XCELIUM? Is imc not available with XCELIUM?

Thanks in advance.




era

Merge BBOX in hierarchical layout

Hi Team,

Problem statement: In a hierarchical layout, I want to get the BBOX of a particular layer without actually flattening the layout.

Description: The layer can be at any hierarchical depth, i.e., it can come from PCELLs or from plain shapes, but if the shapes overlap at the top level then I want the merged BBOX.

I am able to get the BBOX of all the shapes present at different levels of the hierarchy, but I am having trouble merging the BBOXes.

Can you please help me with this? I need an efficient way to merge the BBOXes, because the list containing them is huge.

Thanks in advance.

Regards,

Prasanna




era

Preparing Accellera Portable Stimulus Standard for Ratification

The Accellera Portable Stimulus Working Group met at the DVCon 2018 to move the process forward towards ratification. While we can't predict exactly when it will be ratified, the goal is now more clearly in sight! Cadence booth was busy with a lo...(read more)




era

AMIQ and Cadence demonstrate Accellera PSS v1.0 interoperability

There’s nothing like the heat of a DAC demo to stress new technology and the engineers behind it! Such was the case at DAC 2018 at the new locale of Moscone Center West, San Francisco. Cadence and AMIQ were two of several vendors who announced ...(read more)




era

Generating IBIS models in cadence virtuoso

I'm trying to generate IBIS models for the parts that I'm designing.  I'm designing using CADENCE Virtuoso.  

I'm wondering if there is a tutorial for generating IBIS models in CADENCE Virtuoso.   Please pardon me if my question is broad.      




era

Specman’s Callback Coverage API

Our customers’ tests have become more complex, longer, and consume more resources than before. This increases the need to optimize the regression while not compromising on coverage. Some advanced...





era

Specman: Analyze Your Coverage with Python

In the previous blog about Python and Specman, Specman: Python Is here!, we described the technical details of the Specman-Python integration. Since Python provides so many easy-to-use libraries in various fields, it is very tempting to leverage these Python apps.

Coverage has always been at the center of the verification methodology; in the last few years it has received even more focus as people develop advanced utilities, usually with Machine Learning aids. In any case, any attempt to leverage your coverage usually starts with some analysis of the behavior and trends of a few typical tests. Visualizing the data makes it easier to understand, analyze, and communicate. Fortunately, Python has many visualization libraries.

In this blog, we show an example of how you can use the Python plotting library matplotlib to easily display coverage information during a run. We use the Specman Coverage API to extract the coverage data and a Python module to display coverage grades interactively during a single run, and we show how to connect the two.

Before we look at the example, if you have read the former blog about Specman and Python and were concerned about the fact that python3 is not supported, we are glad to update that in Specman 19.09, Python3 is now supported (in addition to Python2).

The Testcase
Let’s say I have a stable verification environment and I want to make it more efficient. For example: I want to check whether I can make the tests shorter while hardly harming the coverage. I am not sure exactly how to attack this task, so a good place to start is to visually analyze the behavior of the coverage on some typical test I chose. The first thing we need to do is to extract the coverage information of the interesting entities. This can be done using the old Coverage API. 

Coverage API
Coverage API is a simple interface to extract coverage information at a certain point. It is implemented through a predefined struct type named user_cover_struct. To use it, you need to do the following:

  1. Define a child of user_cover_struct using like inheritance (my_cover_struct below).
  2. Extend its relevant methods (in our example we extend only the end_group() method) and access the relevant members (you can read about the other available methods and members in cdnshelp).
  3. Create an instance of the user_cover_struct child and call the predefined scan_cover() method whenever you want to query the data (even in every cycle). Calling this method will result in calling the methods you extended in step 2.

The code example below demonstrates these three steps. We chose to extend the end_group() method, and we keep the group grade in a local variable. Note that we divide it by 100,000,000 to get a number between 0 and 1, since the grade in this API is an integer from 0 to 100,000,000.

struct my_cover_struct like user_cover_struct {
    !cur_group_grade : real;

    // Here we extend user_cover_struct methods
    end_group() is also {
        cur_group_grade = group_grade/100000000;
    };
};
 
extend sys{
      !cover_info : my_cover_struct;
       run() is also {
          start monitor_cover ();
     };
     
     monitor_cover() @any is {
         cover_info = new;
         
         while(TRUE) {
             // wait some delay, for example –
             wait [10000] * cycles;
          
            // scan the packet.packet_cover cover group
            compute cover_info.scan_cover("packet.packet_cover");
          };//while
      };// monitor_cover
};//sys

Pass the Data to a Python Module
After we have extracted the group grade, we need to pass the grade along with the cycle and the coverage group name (assuming there are a few) to a Python module. We will take a look at the Python module itself later. For now, we will first take a look at how to pass the information from the e code to Python. Note that in addition to passing the grade at certain points (addVal method), we need an initialization method (init_plot) with the number of cycles, so that the X axis can be drawn at the beginning, and end_plot() to mark interesting points on the plot at the end. But to begin with, let’s have empty methods on the Python side and make sure we can just call them from the e code.

 # plot_i.py
def init_plot(numCycles):
    print (numCycles)
def addVal(groupName,cycle,grade):
    print (groupName,cycle,grade)
def end_plot():
    print ("end_plot") 

And add the calls from e code:

struct my_cover_struct like user_cover_struct {
     @import_python(module_name="plot_i", python_name="addVal")
     addVal(groupName:string, cycle:int,grade:real) is imported;
  
     !cur_group_grade:real;
  
     //Here we extend user_cover_struct methods
     end_group() is also {
         cur_group_grade = group_grade/100000000;
         
        //Pass the values to the Python module
         addVal(group_name,sys.time, cur_group_grade);      
     }   //end_group
};//user_cover_struct
 
extend sys{
     @import_python(module_name="plot_i", python_name="init_plot")
     init_plot(numCycles:int) is imported;
    
     @import_python(module_name="plot_i", python_name="end_plot")
     end_plot() is imported;
    
     !cover_info : my_cover_struct;
     run() is also {
         start scenario();
    };
    
    scenario() @any is {
         //initialize the plot in python
         init_plot(numCycles);
        
         while(sys.time<numCycles)
        {
             //Here you add your logic     
             
            //get the current coverage information for packet
            cover_info = new;
            var num_items:=  cover_info.scan_cover("packet.packet_cover");
           
            //Here you add your logic       
        
        };//while
        
        //Finish the plot in python
        end_plot();
   
    }//scenario
}//sys
 
  • The imported method declarations (addVal, init_plot, and end_plot) define the methods as they are called from the e code.
  • The @import_python lines are predefined annotations stating that the method declared on the following line is imported from Python; they name the Python module and the method within it.
  • The calls to init_plot(), addVal(), and end_plot() in the e code are the actual calls to the Python methods.

 Before running this, note that you need to ensure that Specman finds the Python include and lib directories, and Python finds our Python module. To do this, you need to define a few environment variables: SPECMAN_PYTHON_INCLUDE_DIR, SPECMAN_PYTHON_LIB_DIR, and PYTHONPATH. 

 The Python Module to Draw the Plot
After we have extracted the coverage information and ensured that we can pass it to a Python module, we need to display this data in the Python module. There are many code examples out there for drawing a graph with Python, especially with matplotlib. You can either accumulate the data and draw a graph at the end of the run, or draw a graph interactively during the run itself, which is very useful especially for long runs.
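As a point of comparison, here is a minimal sketch of the first option – accumulating the samples and drawing a single static plot only at the end of the run. It is an illustration only, not part of the original example; the module and method names (plot_end_i.py, init_plot, addVal, end_plot) simply mirror the interface used above.

# plot_end_i.py - minimal sketch: accumulate samples, plot once at the end
import matplotlib.pyplot as plt

# accumulated samples per cover group: {group_name: ([cycles], [grades])}
data = {}
max_cycles = 0

def init_plot(numCycles):
    # nothing to draw yet; just remember the X-axis range
    global max_cycles
    max_cycles = numCycles

def addVal(groupName, cycle, grade):
    # accumulate the sample; no drawing during the run
    cycles, grades = data.setdefault(groupName, ([], []))
    cycles.append(cycle)
    grades.append(grade)

def end_plot():
    # draw everything once, at the end of the run
    for name, (cycles, grades) in data.items():
        plt.plot(cycles, grades, label=name)
    plt.xlabel('cycles')
    plt.ylabel('grade')
    plt.ylim(0, 1)
    plt.xlim(0, max_cycles)
    plt.legend(shadow=True)
    plt.show()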

Below is code that draws the coverage grade of multiple groups interactively during the run, and at the end of the run it draws circles around the maximum point of each group and adds some text to it. I am new to Python, so there might be better or simpler ways to do this, but it does the job. The cool thing is that there are so many examples to rely on that you can produce this kind of code very fast.

# plot_i.py
import matplotlib
import matplotlib.pyplot as plt
plt.style.use('bmh')
#set interactive mode
plt.ion()
fig = plt.figure(1)
ax = fig.add_subplot(111)
# Holds a specific cover group
class CGroup:
    def __init__(self, name, cycle,grade ):
        self.name = name
        self.XCycles=[]
        self.XCycles.append(cycle)
        self.YGrades=[]
        self.YGrades.append(grade)  
        self.line_Object= ax.plot(self.XCycles, self.YGrades,label=name)[-1]             
        self.firstMaxCycle=cycle
        self.firstMaxGrade=grade
    def add(self,cycle,grade):
        self.XCycles.append(cycle)
        self.YGrades.append(grade)
        if grade>self.firstMaxGrade:
            self.firstMaxGrade=grade
            self.firstMaxCycle=cycle          
        self.line_Object.set_xdata(self.XCycles)
        self.line_Object.set_ydata(self.YGrades)
        plt.legend(shadow=True)
        fig.canvas.draw()
     
#Holds all the data of all cover groups   
class CData:
    groupsList=[]
    def add (self,groupName,cycle,grade):
        found=0
        for group in self.groupsList:
            if groupName in group.name:
                group.add(cycle,grade)
                found=1
                break
        if found==0:
            obj=CGroup(groupName,cycle,grade)
            self.groupsList.append(obj)
     
    def drawFirstMaxGrade(self):
        for group in self.groupsList:
            left, right = plt.xlim()
            x=group.firstMaxCycle
            y=group.firstMaxGrade
           
            #draw arrow
            #ax.annotate("first maximum grade", xy=(x,y),
            #xytext=(right-50, 0.4),arrowprops=dict(facecolor='blue', shrink=0.05),)
           
            #mark the points on the plot
            plt.scatter(group.firstMaxCycle, group.firstMaxGrade,color=group.line_Object.get_color())
          
            #Add text next to the point   
            text='cycle:'+str(x)+' grade:'+str(y)   
            plt.text(x+3, y-0.1, text, fontsize=9,  bbox=dict(boxstyle='round4',color=group.line_Object.get_color()))                                                                      
       
#Global data
myData=CData()
 
#Initialize the plot, should be called once
def init_plot(numCycles):
    plt.xlabel('cycles')
    plt.ylabel('grade')   
    plt.title('Grade over time')  
    plt.ylim(0,1)
    plt.xlim(0,numCycles)
 
#Add values to the plot
def addVal(groupName,cycle,grade):
    myData.add(groupName,cycle,grade)
#Mark interesting points on the plot and keep it shown
def end_plot():
    plt.ioff();
    myData.drawFirstMaxGrade(); 
   
    #Make sure the plot is being shown
    plt.show();
#uncomment the following lines to run this script with simple example to make sure #it runs properly regardless of the Specman interaction
#init_plot(300)
#addVal("xx",1,0)
#addVal("yy",1,0)
#addVal("xx",50,0.3)
#addVal("yy",60,0.4)
#addVal("xx",100,0.8)
#addVal("xx",120,0.8)
#addVal("xx",180,0.8)
#addVal("yy",200,0.9)
#addVal("yy",210,0.9)
#addVal("yy",290,0.9)
#end_plot()
 

In the example we used, we had two interesting entities, packet and state_machine, so we had two corresponding coverage groups. When running our example connected to the Python module, we get the following graph, which is displayed interactively during the run.

[Figure: coverage grade over cycles for the packet and state_machine cover groups, displayed interactively during the run]

When analyzing this specific example, we can see two things. First, packet reaches high coverage quite fast, and a significant part of the run does not contribute to its coverage. On the other hand, something interesting happens with state_machine around cycle 700 that suddenly boosts its coverage. The next step would be to dump graphic information for other entities and see whether something noticeable happens around cycle 700.

To run a complete example, you can download the files from: https://github.com/okirsh/Specman-Python/

Do you feel like analyzing the coverage behavior in your environment? We will be happy to hear about your outcomes and other usages of the Python interface.

Orit Kirshenberg
Specman team




era

Specman’s Callback Coverage API

Our customers’ tests have become more complex and longer, and they consume more resources than before. This increases the need to optimize the regression without compromising on coverage.

Some advanced customers of Specman use Machine Learning based solutions to optimize their regressions, while others use simpler solutions. Based on a request from an advanced customer, we added a new Coverage API in Specman 19.09 called the Coverage Callback. In 20.03, we further enhanced this API by adding more options. There are now two Coverage APIs that provide coverage information during the run (the old scan_cover API and the new Callback API). This blog presents these two APIs and compares them, focusing on the newer one.

Before we get into the specifics of each API, let’s discuss what is common between these APIs and why we need them. Typically, people observe the coverage model after the test ends, and get to know the overall contribution of the test to the coverage. With these two APIs, you can observe the coverage model during the test, and hence, get more insight into the test progress.

Are you wondering about what you can do with this information? Let’s look at some examples.

  1. Recognize cases when the test continues to run long after it already reached its coverage goal.
  2. View the progress of the coverage curve. If a test is “stuck” at the same grade for a long time, this might indicate that the test is not very good and is just a waste of resources.

These analyses can be performed in the test itself – so that the test can decide to stop the run or change something in its configuration – or post-run. You can also present the results visually for analysis, as shown in the blog: Analyze Your Coverage with Python.
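For example, here is a minimal post-run sketch (an illustration only, not part of Specman or of the Coverage APIs) that scans the (cycle, grade) samples collected for one group and reports the cycle after which the grade stopped improving – the kind of “stuck at the same grade” check mentioned above.

# saturation_check.py - illustration only: find where the grade stops improving
def last_improvement(samples):
    """samples: list of (cycle, grade) tuples, in increasing cycle order."""
    best_grade = -1.0
    best_cycle = 0
    for cycle, grade in samples:
        if grade > best_grade:
            best_grade = grade
            best_cycle = cycle
    return best_cycle, best_grade

if __name__ == "__main__":
    # hypothetical samples for one cover group
    packet_samples = [(100, 0.2), (300, 0.55), (700, 0.8), (1500, 0.8), (3000, 0.8)]
    cycle, grade = last_improvement(packet_samples)
    print("grade stopped improving at cycle", cycle, "with grade", grade)
    # a regression script could use this to conclude that cycles beyond this
    # point add little coverage value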

scan_cover API (or “Scanning the Coverage Model”)

With this API, you can get the current status of any cover group or item you are interested in at any point in time during the test (by calling scan_cover()). It is very simple to use; however, it has a performance penalty. To get the coverage grade of any cover group during the test, you should:
1. Trigger the scan_cover at any time when you want the coverage model to be scanned.
2. Implement the scan_cover related methods, such as start_item() and end_bucket(). In these methods, you can query the current grade of group/item/bucket.
The blog mentioned earlier: Analyze Your Coverage with Python describes the details and provides an example.

Callback API

The Callback API enables you to get a callback for the desired cover group(s) whenever they are sampled. This API also provides various query methods for getting coverage-related information, such as the current sampled value. So, in essence, it is similar to the scan_cover API but, as the phrase goes, “same same but different”:

  1. Callback API has almost no performance penalty while scan_cover API does.
  2. Callback API contains a richer set of query methods that provide a lot of information about the current sampled value (vs just the grade with scan_cover).
  3. Using scan_cover API, you decide when you want to query the coverage information (you call scan_cover), while with the Callback API you query the coverage information when the coverage is sampled (from do_callback). So, scan_cover gives you more flexibility, but you do need to find the right timing for this call.

There is no absolute advantage to either of these APIs; which to use depends on what you want to do.

Callback API details

The Callback API is based on a predefined struct called: cover_sampling_callback. In order to use this API, you need to:

  1. Define a struct inheriting cover_sampling_callback (cover_cb_save_data below)
    1. Extend the predefined do_callback() method. This method is a hook that is called whenever any of the cover groups registered to the cover_sampling_callback instance is sampled.
    2. From do_callback() you can access coverage data by using queries such as: is_currently_per_type(), get_current_group_grade() and get_current_cover_group() (as in the example below) and many more such as: get_relevant_group_layers() and get_simple_cross_sampled_bucket_name().
  2. Register the desired cover group(s) to this struct instance using the register() method.

Take a look at the following code:

// Define a coverage callback.
// Its behavior – print to screen the current grade.
struct cover_cb_save_data like cover_sampling_callback {
    do_callback() is only {
       // In this example, we care only about the per_type grade, and not per_instance
       if is_currently_per_type() {           
            var cur_grade : real = get_current_group_grade();
            sys.save_data (get_current_cover_group().get_name(), cur_grade);
        };//if
    };//do_callback()
};// cover_cb_save_data


extend sys {
    !cb : cover_cb_save_data;

   // Instantiate the coverage callback, and register to it two of my coverage groups
    run() is also {
       cb  = new  with {
        var gr1:=rf_manager.get_struct_by_name("packet").get_cover_group("packet_cover");
        .register(gr1);
        var gr2:=rf_manager.get_struct_by_name("sys").get_cover_group("mem_cover");
       .register(gr2);
       };//new  
    };//run()

  save_data(group_name : string, group_grade : real) is {
        //here you either print the values to the screen, update a graph you show or save to a db 
  };// save_data
};//sys

In the blog Analyze Your Coverage with Python mentioned above, we show an example of how you can use the scan_cover API to extract coverage information during the run, and then use the Specman-Python API to display the coverage interactively during the run (using the Python plotting library matplotlib). If you find this usage interesting and want to use the same example with the Callback API instead of the scan_cover API, you can download the full example from GitHub: https://github.com/efratcdn/cover_callback.

Specman Team

 

 




era

IEEE 1801/UPF Tutorial from Accellera—Watch and Learn

If you weren't able to attend the 2013 DVCon, you missed out on a great IEEE 1801/UPF tutorial delivered by members of the IEEE committee. Accellera had the event recorded and that recording is now posted on the Accellera.org website. Regardless of your work so far with low power design and verification, you need to watch this video.

Power management is becoming ubiquitous in our world. The popular aspect is that reduced power is good for the environment, and that is true. But for those teams that have been building chips around the 40nm node and below, there is another truth: power management is often required simply to get working silicon. As the industry expands the number of designs with power management and forges deeper into advanced nodes, we steadily identify improvements to the power format descriptions. The most recent set of improvements to the IEEE 1801 standard is now available in the 2013 version of that standard.

To help bring the standard to life, five representatives from the IEEE joined to deliver a tutorial at DVCon in 2013. Qi Wang (Cadence), Erich Marschner (Mentor), Jeffrey Lee (Synopsys), John Biggs (ARM), and Sushma Honnavarra-Prasad (Broadcom) each contributed to the tutorial. It started with a review, delivered by the EDA companies, of the UPF basics that led to the IEEE 1801 standard. The IEEE 1801 users then presented tutorial content on how to apply the standard. The session concluded with a look forward to the IEEE 1801-2013 (UPF 2.1) standard, which was released two months after the DVCon tutorial and is available through the Accellera Get program.

So after the bowl games are over and you've returned through the woods and back over the river from Grandma's, grab a cup of hot cocoa and learn more about the power standards you may well be using in 2014.

Regards,

Adam "The Jouler" Sherer




era

IMC: toggle coverage for package array

Hello!

I have an input signal like this: input wire [ADM_NUM-1:0][1:0] m_axi_ddr_rresp.

When I analyze coverage in IMC, this signal is not covered!

Can I collect toggle coverage for this signal?

 




era

Can't collect AXI4 burst_started coverage

I have a problem with my AXI4 coverage.

I enable coverage collection in AXI4 with:

      set_config_int("axi4_active_slave_agent_0.monitor.coverModel", "burst_started_enable", 1);
      set_config_int("axi4_active_slave_agent_0.monitor.coverModel", "coverageEnable", 1);

but I don't get any results.

I think the problem is in the callbacks, but I have tried connecting all the callbacks and still don't get a positive result.

Can you help me?




era

Coverage error

Hi  all,

I am getting this warning while generating the coverage report. Can you help me clear it?

ncsim: *W,COVOPM: Coverage configuration file command "set_covergroup -optimize_model" can be specified to improve the performance and scalability of coverage model containing SV covergroups. It may be noted that subsequent merging of a coverage database saved with this command and a coverage database saved without this command is not allowed.




era

Is it possible to get a diff between two coverage databases in IMC?

I'm in the process of weeding a regression test list. I have a coverage database from the full regression list and would like to diff it against the coverage database from the new, reduced regression test list. If possible, I would then like to trace any buckets covered by the full list, but not by the partial list, back to the original tests that covered them.

Is that possible using IMC? If not, is it possible to do from Specman itself?

(Note that we're not using vManager)

Thanks,

Avidan




era

Extrowords #97: Generalissimo 68

Sample clues

18 across: Makoto Hagiwara and David Jung both claim to have invented it (7,6)

1 down: French impressionist who rejected that term (5)

3 down: Artificial surface used for playing hockey (9)

7 down: The sequel to Iliad (7)

12 down: Adipose tissue (4,3)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #98: Generalissimo 69

Sample clues

6 across: Franchise revived by Frank Miller (6)

13 across: What Keanu Reeves and Zayed Khan have in common (5)

18 across: What Frank Sinatra and George Clooney have in common (6,6)

19 across: Dosa mix, for example (6)

2 down: Green, in a non-environmental way (7)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #99: Generalissimo 70

Sample clues

5 down: Torso covering (6)

7 down: Government by rogues (12)

15 across: eBay speciality (7)

18 across: Demonic (8)

20 across: Common language (6,6)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #100: Generalissimo 71

Sample clues

17 across: Beckham speciality (4,4)

4 down: Havana speciality (5)

19 across: Infamous 1988 commercial against Michael Dukakis (9,4)

11 down: Precisely (2,3,3)

13 down: City infamously ransacked by the Japanese in 1937 (7)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

Extrowords #101: Generalissimo 72

Sample clues

11 across: Chandigarh’s is 0172 (3,4)

21 across: He’s a loser, baby (4)

1 down: Garment meant to shape the torso (6)

12 down: Its slogan: “Life, Liberty and the Pursuit” (8)

18 down: Noise made by badminton players? (6)

Extrowords © 2007 IndiaUncut.com. All rights reserved.
India Uncut * The IU Blog * Rave Out * Extrowords * Workoutable * Linkastic




era

allegro schematic Hierarchy

Hello, I need to ask a question regarding hierarchy in an Allegro schematic. I have created one, but now I need to make changes to one section without them being reflected in the other. How can I do that?




era

Production files generation

I have a question regarding the production files for a PCB. I have added two cutouts to my PCB.
When I generate my drill file, these do not appear; only the holes for the tracks and the inserted components appear. What do I need to do to make the cutouts appear in my drill file?




era

Why does a new package update generate DRC errors after waiving?

I've redesigned a custom TO220FLAT Package

First I created a TO220shape.ssm  with PCB Editor. Then I created a surface mount T220build.pad in Padstack Editor using TO220shape.ssm. Then I created a TO220FLAT.psm in PCB Editor. I placed 3 Connect pins and 9 Mechanical pins for the TO220 TAB, using standard through-hole pads for better current handling.

Adding those Mechanical pins created many DRC errors caused by the proximity of those pads attached to the TO220shape.

Thru Pin to SMD Pin Spacing (-200.0 0.0) 5 MIL OVERLAP DEFAULT NET SPACING CONSTRAINTS Mechanical Pin "Pad50sq30d" Pin "T220build, 2"

I corrected the situation (or so I thought) by waiving those DRC errors, thinking that they could not cause any problem, and because that’s what I want, i.e., 9 through-holes under the TO220 device. The idea is that when this device is mounted flat on the PCB, it could carry a lot of current via the 9 pads, which would make a good high-current path to the inner layers.

I then saved the package and updated all the related footprint schematic parts in Capture, created a new netlist, and then imported the new logic into PCB Editor to reflect the change. When File > Import > Logic finishes, I get no error feedback (which, for me, is a substantial achievement in itself).

Now, in the Design window I see all those DRC errors popping up again, despite the fact that I waived those DRCs back when I edited the padstack. If I run a Design Rule Check (DRC) report, I see all those DRCs listed again. I understand that I can go ahead and waive all those DRCs again (100 in total), but I’m thinking there has got to be a better way of doing this.

Please, any advice is welcome. Thanks.

 




era

Specman Makefile generator utility

I've heard lots of people asking for a way to generate Makefiles for Specman code, and it seems there are some who don't use "irun" which takes care of this automatically. So I wrote this little utility to build a basic Makefile based on the compiled and loaded e code.

It's really easy to use: at any time load the snmakedeps.e into Specman, and use "write makefile <name> [-ignore_test]".
This will dump a Makefile with a set of variables corresponding to the loaded packages, and targets to build any compiled modules.
Using -ignore_test will avoid having the test file in the Makefile, in case you switch tests often (who doesn't?).

It also writes a stub target so you can do "make stub_ncvlog" or "make stub_vhdl", etc.

The targets are pretty basic; I thought it was more useful to include this from the main Makefile and define your own more complex targets/dependencies as required.

The package uses the "reflection" facility of the e language, which has been documented since Specman 8.1, so you can extend this utility if you want (please share any enhancements you make).

 Enjoy! :-)

Steve.




era

Creating transition coverage bins using a queue or dynamically

I want to write transition coverage on an enumeration. One part of that transition is a queue of the enum values. I construct this queue in my constructor. Considering the example below, how would one go about it?

In my coverage bin I can create a range like this: A => [queue1Enum[0]:queue1Enum[$]] => [queue2Enum[0]:queue2Enum[$]]. But then I only get the first and last elements.

typedef enum { red, d_green, d_blue, e_yellow, e_white, e_black } Colors;
 Colors dColors[$];
 Colors eColors[$];
 Colors Lcolors;

 // sort the enum literals into queues by the first letter of their names
 Lcolors = Lcolors.first();
 do begin
  if (Lcolors.name()[0] == "d") begin
   dColors.push_back(Lcolors);
  end
  if (Lcolors.name()[0] == "e") begin
   eColors.push_back(Lcolors);
  end
  Lcolors = Lcolors.next();
 end while (Lcolors != Lcolors.first());

 covergroup cgTest with function sample(Colors c);
   cpTran : coverpoint c {
      bins t[] = (red => dColors => eColors);
   }
 endgroup

bins t[] should come out like this: (red => d_blue,d_green => e_yellow,e_white)

 




era

Importing a capacitor interactive model from manufacturer

Hello,

I am trying to import (into Spectre) a SPICE model of a ceramic capacitor manufactured by Samsung EM. The link to the model is here:

http://weblib.samsungsem.com/mlcc/mlcc-ec.do?partNumber=CL05A156MR6NWR

They provide a static SPICE model and an interactive SPICE model.

I had no problem including the static model.

However, the interactive model, which models the voltage and temperature coefficients, does not seem to be an ordinary SPICE model. They provide HSPICE, LTspice, and PSpice model files, and I failed to include any of them.

Any suggestions ?




era

Delay Degradation vs Glitch Peak Criteria for Constraint Measurement in Cadence Liberate

Hi,

This question is related to the constraint measurement criteria used by the Liberate inside view. I am trying to characterize a specific D flip-flop for low voltage operation (0.6V) using Cadence Liberate (V16). 

When the "define_arcs" are not explicitly specified in the settings for the circuit (but the inputs/outputs are indeed correct in define_cell), the inside view seems to probe an internal node (i.e., the master latch output) for the constraint measurements instead of the Q output of the flip-flop. So, to force the tool to probe the Q output, I added the following code to the constraint arcs:

# constraint arcs from CK => D
define_arc
-type hold
-vector {RRx}
-related_pin CP
-pin D
-probe Q
DFFXXX

define_arc
-type hold
-vector {RFx}
-related_pin CP
-pin D
-probe Q
DFFXXX

define_arc
-type setup
-vector {RRx}
-related_pin CP
-pin D
-probe Q
DFFXXX

define_arc
-type setup
-vector {RFx}
-related_pin CP
-pin D
-probe Q
DFFXXX

With -probe Q, Liberate identifies Q as the output, but it uses the glitch-peak criterion instead of the delay degradation method. What could be the exact reason for this unintended behavior? In my external (Spectre) SPICE simulation, the flip-flop works well and does not show any issues in the output delay degradation when the input sweeps.

Thanks

Anuradha




era

Library Characterization Tidbits: Over the Clouds and Beyond with Arm-Based Graviton and Cadence Liberate Trio

Cadence Liberate Trio Characterization Suite, ARM-based Graviton Processors, and Amazon Web Services (AWS) Cloud have joined forces to cater to the High-Performance Computing, Machine Learning/Artificial Intelligence, and Big Data Analytics sectors. (read more)




era

Virtuosity: Concurrently Editing a Hierarchical Cellview

This blog discusses key features of concurrently editing a hierarchical cellview.(read more)




era

News18 Urdu: Latest News Perambalur

Visit News18 Urdu for the latest news, breaking news, news headlines and updates from Perambalur on politics, sports, entertainment, cricket, crime and more.




era

News18 Urdu: Latest News Hyderabad Urban

Visit News18 Urdu for the latest news, breaking news, news headlines and updates from Hyderabad Urban on politics, sports, entertainment, cricket, crime and more.




era

Linux Kernel 2.2/2.4 Local Root Ptrace Vulnerability