
Metamorphic Testing: The Future of Verification?

Curious about what’s going on behind the scenes in verification? Bernard Murphy, Jim Hogan, and our own Paul Cunningham are on the case with the “Innovation in Verification” blog stream over at semiwiki.com. Every month, this trio reviews a newly published academic paper relevant to verification and discusses its implications. Be sure to stop by—it’s a great place to see what might be coming down the pipeline someday.

This month, they discuss the implications of metamorphic testing. The purpose of metamorphic testing is to define a verification approach for situations where there is no “golden reference.” This comes up a lot now as designs grow in complexity, and it raises the question: how does one know the design is verified if there is no standard to compare to? Metamorphic testing addresses the lack of a “gold standard” by comparing the results of related tests instead. The paper reviewed by this team used metamorphic testing to study methods of managing JavaScript tags.
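To make the idea concrete, here is a minimal sketch (our illustration, not taken from the paper) of a metamorphic check in SystemVerilog. With no golden value available, the test instead asserts a relation that must hold between two related runs: permuting an input stream must not change the DUT's computed average.

// Metamorphic check sketch: no golden reference, only a relation
// between the results of two related tests.
module mt_demo;
  // Stand-in for the DUT; in a real flow this would drive the design's
  // interface and sample its output.
  function automatic real dut_average(input int unsigned data[]);
    real sum = 0.0;
    foreach (data[i]) sum += data[i];
    return (data.size() > 0) ? sum / data.size() : 0.0;
  endfunction

  initial begin
    int unsigned stim[]     = '{7, 3, 9, 1};
    int unsigned shuffled[] = '{1, 9, 3, 7};  // related test: same values, permuted
    real r1 = dut_average(stim);
    real r2 = dut_average(shuffled);
    // The metamorphic relation: the average is permutation-invariant.
    if (r1 != r2) $error("metamorphic relation violated: %f vs %f", r1, r2);
    else          $display("relation holds: average = %f", r1);
  end
endmodule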

Paul saw this as a valuable new class of coverage. By understanding the relationships among tests, metamorphic testing enables better distribution analyses, which can reveal critical-but-complex issues that traditional verification methods may overlook. He saw it as an emerging class of coverage that new verification tools could be built around. Paul asserted that a future metamorphic-testing-based tool’s main contribution to verification would be better analysis of noisy performance results where the noise is multi-modal; it could be useful in detecting race conditions and similar hard-to-debug anomalies. Paul also sees metamorphic testing as ripe for ML techniques. Overall, Paul sees a bright future for metamorphic testing in verification.

Jim is reminded of Solido and SPICE—these metamorphic testing capabilities are “more than just a feature”; they might be a product. Maybe even a whole new class of verification tools, as Paul said.

Bernard says that this topic is “too rich to address in one blog”, so be sure to head over to the post to see more of what the future has in store for verification.





BoardSurfers: Training Insights - Fundamentals of PDN for Design and PCB Layout

What is a Power Distribution Network (PDN) after all but resistance, inductance, and capacitance in the PCB and components? And, of course, it is there to deliver the right current and voltage to each component on your PCB. But is that all? Are there oth...





Insider Story of the New IEEE 1801-2013 (UPF 2.1) Standard

The IEEE has announced the publication of the new 1801-2013 standard, also known as UPF 2.1, and its immediate availability for free download through the IEEE 1801-2013 Get Program. Even though the standard is new to the rest of the world, for the members of the IEEE working group it is finally done and already in the past.

There is a Chinese saying "好事多磨," which means "good things take time to happen." I forget the exact time when I first joined the working group for the new standard -- about two and a half years ago -- but I do remember long hours of meetings and many "lively" debates and discussions. Now that the "hard times" are behind us, I would like to share some fun facts about the working group and the standard.

  • The 1801 working group is the largest entity-based ballot group in IEEE-SA history.
  • The new standard was initially planned for 2012 but was delayed purely by the large amount of work required.
  • At one point, the group debated whether the new standard should be called UPF 2.1 or 3.0. It may sound odd now, but we spent quite some time discussing this. Eventually we settled on 2.1, as that was the original plan.
  • The 1801-2013 document has 358 pages, 53% thicker than the previous version. (The sheer amount of change indicates that this is more than the normal incremental update suggested by the name 2.1.)
  • Around 300 real issues were reported against the previous version, and a majority of them were fixed in the new release.
  • This is the first release with constructs and semantics coming from the Common Power Format (CPF), a sign of convergence of the two industry-leading power formats.
  • There are about 100 working group meetings in my Outlook calendar since 2011, with meeting times ranging from 2 to 8 hours.
  • We extensively used Google Drive (called Google Docs when the working group started), a great productivity tool. I cannot imagine how any standard could have been done before Google existed!

Personally, I had an enjoyable journey, especially the privilege of working with many industry experts who are all passionate about low power. I do have one more thing to share, though. My older daughter went from middle school to high school during the development of the new standard. Since most of the meetings took place in the early morning, California time, she had to endure the pain of listening to all these discussions of power domains, power switches, etc. on her way to school.

I asked her if she learned anything. She told me that other than being able to recognize the voices of Erich, John and Joe on the line, she also learned that she would never want to become an electrical or computer engineer! She was so happy that the meetings stopped a couple of months ago. But what I did not tell her is that the meetings will resume after DAC! Well, I am sure this will be a big motivation for her to get her own driving license in the summer.

If you want some quick technical insights into the new standard, check out my recent EE Times article, "IEEE 1801-2013: A bold step towards power format convergence."

Qi Wang

 





The Power of Big Iron

Key findings: 5X to 32X faster low-power verification using Palladium XP emulation

It’s hot in July in Korea, and not just the temperature; the ideas, too. The ideas that flowed at CDNLive Korea were exciting, and they included a very interesting talk by Jiyeon Park from the System LSI division of Samsung Electronics. The talk, titled “Enabling Low-Power Verification using Cadence Palladium XP,” struck a chord with the audience, and this blog captures some of the highlights from that public talk in Seoul this summer.

Motivation

If you are familiar with the breadth of the product lines at Samsung Electronics, you will appreciate the diversity of the end-market requirements that they must fulfill. These markets and products include:

Mobile/Handheld

  • Smartphones
  • Tablets
  • Laptops

Consumer/Digital Home

  • High-definition/ultra-high-definition TV
  • Gaming consoles
  • Computers 

Networking/Data Center

  • Servers
  • Switches
  • Communications

What all of these markets have in common is that energy efficiency is now an integral and leading part of the value equation. For design teams, a good understanding of power informs a host of critical decisions: design architecture, IP make-versus-buy choices, manufacturing process selection, and the use of low-power design techniques are all critically influenced by power.

Using simulation for low-power verification

Once the decision has been made to overlay power-reduction design techniques, such as power shutdown, new dimensions are added to the already complex SoC verification task. The RTL verification environment is first augmented with a power intent file; in this case, IEEE 1801 was the format. The inclusion of this power intent information enables the examination of power domain shutdown, isolation operations, proper retention, and level shifting.


Figure 1: Incisive SimVision power verification elements example

Low-power verification using emulation

Simulation for low-power verification works well, so why emulation? One word—complexity!  It is easy to forget that “design complexity” (usually measured in gates or transistors) is not the same as “verification complexity” (which is really hard to measure). Consider a design with four power domains: three that are switchable, and one that is switchable but also has high- and low-voltage states. Counting per-domain states, that yields 2 + 2 + 2 + 3 = 9 basic states; combining them yields 2 × 2 × 2 × 3 = 24 modes of operation to test. Although some of those modes may not be consequential, when paired with hundreds or even thousands of functional tests, you can begin to understand the impact of overlaying low power on the verification problem. Thus, it becomes very desirable to enlist the raw computational power of emulation.

Power off/on scenario on Palladium XP platform

A typical functional test is augmented to include the power control signals. For power shutoff verification, for instance, the sequence begins with cycles that assert isolation, followed by state retention, and finally power shutdown of the domain to verify operation. The figure below calls out a number of checks that ought to be performed.


Figure 2: Power shutoff sequence and associated checks to make
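As a rough illustration of that ordering, here is a minimal testbench sketch (the signal names are our own invention, not from the talk) that drives an isolate, retain, shut-down sequence and its power-up mirror image:

// Sketch of a power shutoff/power-up control sequence: isolation is
// asserted before the retention save, and released last on power-up.
module power_seq_demo;
  logic clk = 0;
  logic iso_enable = 0, retain_save = 0, retain_restore = 0, power_on = 1;
  always #5 clk = ~clk;

  initial begin
    repeat (4) @(posedge clk);               // run functionally for a while
    iso_enable <= 1'b1;                      // 1. isolate the domain's outputs
    @(posedge clk) retain_save <= 1'b1;      // 2. save state into retention registers
    @(posedge clk) power_on <= 1'b0;         // 3. switch the domain off
    repeat (8) @(posedge clk);               // off-state checks happen here
    power_on <= 1'b1;                        // 4. power the domain back up
    @(posedge clk) retain_restore <= 1'b1;   // 5. restore the retained state
    @(posedge clk) iso_enable <= 1'b0;       // 6. release isolation last
    repeat (4) @(posedge clk);
    $finish;
  end
endmodule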

IEEE 1801 support in Palladium XP platform

The IEEE 1801 support found in the Palladium XP platform includes some noteworthy capabilities, as well as some implications for the user. First is patented memory randomization provided by the Palladium XP platform. This capability includes randomization of memory contents during shutdown and power-up, control over the read value during the power-off state, non-volatile memory state retention, and freezing of data on retention. The user should be aware that there is about a 10%-20% capacity overhead associated with IEEE 1801-driven low-power verification.

Figure 3: Palladium low-power verification enables schedule improvement

Palladium low-power verification flow

The great thing about the emulation workflow for IEEE 1801 power verification is that the only change is to include the IEEE 1801 power intent file during the compilation stage!

Considerations for emulation environment bring-up

The Samsung team took a Universal Verification Methodology (UVM) approach. This gives the testbench environment a structure that is very conducive to a metric-driven methodology. Using a testbench acceleration interface, teams can run the testbench on a software simulator and the design on the emulator. In addition, the formalism accommodates incomplete designs, so missing pieces do not hinder verification of the parts that are complete.

Experimental results

The most exciting part of the paper was the results obtained. For a minor overhead in compile time and capacity, the team was able to improve the runtimes of their tests by 5X to 32X. Being able to run tests in a fraction of the time, or many more tests in the same time, has always been a benefit for emulation users. Now low-power verification is a proven part of the value provided to Palladium XP platform users.

Figure 4: Samsung low-power verification emulation results

Conclusions

The key conclusions found were:

  • No design modification was needed for IEEE 1801 support
  • There is a small capacity and compile-time overhead
  • The emulation and simulation results match
  • The longer the test, the greater the net speedup versus software simulation
  • Run times improved by 5X to 32X!

With this flow in place, the team has begun power-aware testing that includes firmware and software verification alongside the hardware testing. This expansion enables further optimization of the power architecture. In addition, they are seeing faster silicon bring-up in the context of an applied low-power strategy.

Steve Carlson





gm of an active mixer

Hi all,

What is the most accurate way to simulate the gm of the RF transistors (RF stage) of an active mixer (single-balanced or Gilbert cell)?

I tried to simulate it in many ways, such as:

1. DC annotation (but of course this is incorrect due to the switching operation of the mixer)

2. d(i_ds)/d(v_gs) using HB analysis and then taking the value at zero (since it is a DC characteristic). For this, I chose in the simulator's HB results: voltage, spectrum, rms, magnitude.

3. Using the OP, OPT buttons in the calculator and then extracting the gm of the transistor. 

The problem is that each way gives a different value, which makes the procedure of designing an active mixer very difficult. In addition, when I simulate the voltage conversion gain of the active mixer and try to compare it to the formula (2/pi)*gm*RL (either in linear or dB), I get numbers that are far from the simulated values. I understand that I would not get the same results, but they should not differ by hundreds of percent.

I see in many publications that people plot graphs of a mixer's gm vs. different parameters, and I am starting to doubt whether the results are correct.

I would appreciate any help,

Thanks in advance





Noise figure optimization of LNA

Hello, I am looking to reach the minimal noise figure by picking a combination of RLC values on the output load, as shown below.
I tried to do it with PNOISE, but it gives me an error on the |f(in)|; I am not sure about the sidebands in my case (it's not a mixer).

How can I show the noise figure and optimize it using my output RLC?

Thanks






How to get the location of Assembly Line

Hi 

I'm trying to find the location of the assembly line in the design automatically, without using "Show Element". I also want to find the end points of that line. The line is on the "Package Geometry/Assembly_Top" layer. Is there any code snippet to find the location of the assembly line?





Inconsistent behaviour of warn() between Virtuoso and Allegro

For a project, we depend on capturing warnings. This works fine in Virtuoso but behaves differently in Allegro.

In our observations:

Virtuoso:

>>> warn("Hello")

*WARNING* Hello

Allegro:

>>> warn("Hello")

*WARNING* Hello

But when we capture the warning:

Virtuoso:

>>> warn("Hello") getWarn()

"Hello"

Allegro:

>>> warn("Hello") getWarn()

"*WARNING* Hello"

This is a problem for us because we pass an empty string to warn() and depend on the fact that no warning results in an empty string from getWarn(), but on Allegro the captured output always begins with *WARNING*.

Is there a way to make the behavior consistent in both versions?





Farmers, Technology and Freedom of Choice: A Tale of Two Satyagrahas

This is the 23rd installment of The Rationalist, my column for the Times of India.

I had a strange dream last night. I dreamt that the government had passed a law that made using laptops illegal. I would have to write this column by hand. I would also have to leave my home in Mumbai to deliver it in person to my editor in Delhi. I woke up trembling and angry – and realised how Indian farmers feel every single day of their lives.

My column today is a tale of two satyagrahas. Both involve farmers, technology and the freedom of choice. One of them began this month – but first, let us go back to the turn of the millennium.

As the 1990s came to an end, cotton farmers across India were in distress. Pests known as bollworms were ravaging crops across the country. Farmers had to use increasing amounts of pesticide to keep them at bay. The costs of the pesticide and the amount of labour involved made it unviable – and often, the crops would fail anyway.

Then, technology came to the rescue. The farmers heard of Bt Cotton, a genetically modified type of cotton that kept these pests away, and was being used around the world. But it was illegal in India, even though no bad effects had ever been recorded. Well, who cares about ‘illegal’ when it is a matter of life and death?

Farmers in Gujarat got hold of Bt Cotton seeds from the black market and planted them. You’ll never guess what happened next. As 2002 began, all cotton crops in Gujarat failed – except the 10,000 hectares that had Bt Cotton. The government did not care about the failed crops. They cared about the ‘illegal’ ones. They ordered all the Bt Cotton crops to be destroyed.

It was time for a satyagraha – and not just in Gujarat. The late Sharad Joshi, leader of the Shetkari Sanghatana in Maharashtra, took around 10,000 farmers to Gujarat to stand with their fellows there. They sat in the fields of Bt Cotton and basically said, ‘Over our dead bodies.’ Joshi’s point was simple: all other citizens of India have access to the latest technology from all over. They are all empowered with choice. Why should farmers be held back?

The satyagraha was successful. The ban on Bt Cotton was lifted.

There are three things I would like to point out here. One, the lifting of the ban transformed cotton farming in India. Over 90% of Indian farmers now use Bt Cotton. India has become the world’s largest producer of cotton, moving ahead of China. According to agriculture expert Ashok Gulati, India has gained US$ 67 billion in the years since from higher exports and import savings because of Bt Cotton. Most importantly, cotton farmers’ incomes have doubled.

Two, GMO crops have become standard across the world. Around 190 million hectares of GMO crops have been planted worldwide, and GMO foods are accepted in 67 countries. The humanitarian benefits have been massive: Golden Rice, a variety of rice packed with minerals and vitamins, has prevented blindness in countless new-born kids since it was introduced in the Philippines.

Three, despite the fear-mongering of some NGOs, whose existence depends on alarmism, the science behind GMO is settled. No harmful side effects have been noted in all these years, and millions of lives impacted positively. A couple of years ago, over 100 Nobel Laureates signed a petition asserting that GMO foods were safe, and blasting anti-science NGOs that stood in the way of progress. There is scientific consensus on this.

The science may be settled, but the politics is not. The government still bans some types of GMO seeds, such as Bt Brinjal, which was developed by an Indian company called Mahyco, and used successfully in Bangladesh. More crucially, a variety called HT Bt Cotton, which fights weeds, is also banned. Weeding takes up to 15% of a farmer’s time, and often makes farming unviable. Farmers across the world use this variant – 60% of global cotton crops are HT Bt. Indian farmers are so desperate for it that they choose to break the law and buy expensive seeds from the black market – but the government is cracking down. A farmer in Haryana had his crop destroyed by the government in May.

On June 10 this year, a farmer named Lalit Bahale in the Akola District of Maharashtra kicked off a satyagraha by planting banned seeds of HT Bt Cotton and Bt Brinjal. He was soon joined by thousands of farmers. Far from our urban eyes, a heroic fight has begun. Our farmers, already victimised and oppressed by a predatory government in countless ways, are fighting for their right to take charge of their lives.

As this brave struggle unfolds, I am left with a troubling question: All those satyagrahas of the past by our great freedom fighters, what were they for, if all they got us was independence and not freedom?

The India Uncut Blog © 2010 Amit Varma. All rights reserved.





For this Brave New World of cricket, we have IPL and England to thank

This is the 24th installment of The Rationalist, my column for the Times of India.

Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu: nothing new, just another round on the treadmill. Although I would remember her fondly, I thought cricket and I were done.

And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring up into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering into a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket.

I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken you up with such a jolt on this lovely Sunday morning, let me explain.

Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy.

When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal.

West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And some sides began to realise that they had been underestimating the value of aggression in one-day cricket as well.

The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive.

As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game.

In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve.

Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling.

This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin.

There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now.

But one day, they will all have to learn to play like this.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.





Encryption of IP for Simulation with IES

I'm sending encrypted HDL to a customer who will use Cadence IES for simulation and was wondering how I should go about the encryption.

Does IES support IEEE P1735, and if so, where can I find Cadence's public key for performing the encryption?

Or is there an alternative solution that I can use for encryption?





How do we use the concept of Save and Restore during real development (debugging)?

Hi All,

I'm trying to understand the checkpoint concept. When I looked up the save and restart concept in cdnshelp, it only describes "$save" and "xrun -r "~~~"".

I also found the link below about save/restart and how it saves your time.

But I could not see any of those benefits in my experiment with save & restart (I fully agree with the article, though).

OK, so here is my experiment.

1. I declared $save and got the result below, as I expected, within the simple UVM code.

In UVM code...

$display("TEST1");
$display("TEST2");
$save("SAVE_TEST");
$display("TEST3");
$display("TEST4");

And when I restart at the "SAVE_TEST" point with xrun -r "SAVE_TEST", I get the log below:

xcelium> run
TEST3
TEST4

OK, it is good; this is what I expected. (The concept of Save and Restore is simple: instead of re-initializing your simulation every time you want to run a test, initialize it only once. Then you can save the simulation as a “snapshot” and re-run it from that point to avoid hours of initialization time. It used to be inconvenient. I agree.)

2. But the problem is that I can't restart with modified code. Let's look at the example below.

I just put "TEST5" in place of "TEST3":

$display("TEST1");
$display("TEST2");
$save("SAVE_TEST");
$display("TEST5"); //$display("TEST3");
$display("TEST4");

and I rerun with xrun -r "SAVE_TEST"; then I get the same log:

xcelium> run
TEST3
TEST4

There is no "TEST5", though I expected "TEST5" in the log. From this we know that $save cannot pick up code that was modified after the $save point.

Ideally, though, this is how we would approach our goal of saving development time.

So I want to know: is there any way that, instead of re-initializing our simulation every time we want to run a test, we can initialize it only once and keep developing (debugging) our code?

If there is, could you show me a simple example?





How to run a regression and merge the ncsim.trn files of all tests into a single file to view the waveforms in SimVision?

Hi all,

I want to know how to run a regression in Cadence tools and merge the ncsim.trn file of each test case into a single file so I can view all the waveforms in SimVision. I am using a Makefile to invoke the test cases.

e.g.:

test0:
	irun -uvm -sv -access +rwc $(RTL) $(INTER) $(PKG) $(TOP) $(probe) +UVM_VERBOSITY=UVM_MEDIUM +UVM_TESTNAME=test0

test1:
	irun -uvm -sv -access +rwc $(RTL) $(INTER) $(PKG) $(TOP) $(probe) +UVM_VERBOSITY=UVM_MEDIUM +UVM_TESTNAME=test1

I want to call test0 followed by test1, or run both tests in parallel, and view the waveforms for both test cases.

I am new to this tool; please help me with it.

                     






IC Packagers: The Different Types of Mirrors

I’m not talking about carnival funhouse mirrors, but rather the different options for mirroring symbols, vias, and bond fingers in your IC Package layout. The Allegro Package Designer Plus and SiP Layout tools have two distinct styles of m...





Different Extracted Capacitance Values for the Same MOM Cap Structures Obtained from the Quantus QRC Field Solver

Hello,

 

I am using Virtuoso 6.1.7.

 

I am performing the parasitic extraction of a MOM cap array of 32 caps. I use Quantus QRC and enable the field solver. I select “QRCFS” for the field solver type and “High” for the field solver accuracy. The unit MOM cap is horizontally and vertically symmetric. The array looks like the sketch below, and there are no other structures except the unit caps:

By symmetry, the capacitance values of the unit caps should be symmetric with respect to a vertical axis between cap16 and cap17 (shown with the dashed red line). For example,

the capacitance of cap1 should be equal to the capacitance of cap32

the capacitance of cap2 should be equal to the capacitance of cap31

etc. as there are no other structures around the caps that might create some asymmetry.

Nevertheless, what I observe is the following after the parasitic extraction:

As can be seen, the result is not symmetric, contrary to what is expected. I should also add that I do not observe this when I perform the parasitic extraction with no field solver.

Why do I get this result? Is it an artifact of the field solver tool (my conclusion was yes, but it still must be verified)? If not, how can something like this happen?

 

Many thanks in advance.

 

Best regards,

Can





Is there a simple way of converting a schematic to an s-parameter model?

Before I ask this, I am aware that I can output an s-parameter file from an SP analysis.

I'm wondering if there is a simple way of creating an s-parameter model of a component.

As an example, if I have an S-parameter model that has 200 ports and 150 of those ports are to be connected to passive components and the remaining 50 ports are to be connected to active components, I can simplify the model by connecting the 150 passive components, running an SP analysis, and generating a 50 port S-parameter file.

The problem is that this is cumbersome. You've got to wire up 50 PORT components and then after generating the s50p file, create a new cellview with an nport component and connect the 50 ports with 50 new pins.

Wiring up all of those port components takes quite a lot of time to do, especially as the "choosing analyses" form adds arrays in reverse (e.g. if you click on an array of PORT components called X<0:2> it will add X<2>, X<1>, X<0> instead of in ascending order) so you have to add all of them to the analyses form manually.

Is there any way of taking a schematic and running some magic "generate S-parameter cellview from schematic cellview" function that automates the whole process?





Take Advantage of Advancements in Real Number Modeling and Simulation

Verification is the top challenge in mixed-signal design. Bringing the analog and digital domains together into unified verification planning, simulation, and debugging is a challenging task, given the rapidly increasing size and complexity of mixed-signal designs. To more completely verify the functionality and performance of a mixed-signal SoC and the AMS IP blocks used to build it, verification teams simulate at the transistor, analog behavioral, real-number model (RNM), and RTL levels, and in combinations of these.

In recent years, RNM simulation has been adopted for functional verification by many teams due to the advantages it offers, including simpler modeling requirements and much faster simulation speed compared to traditional analog behavioral models such as Verilog-A or VHDL-AMS. Verilog-AMS with its wreal type continues to be a popular choice, and the standardization of real number extensions in SystemVerilog (SV) has made SV-RNM an even more attractive option for MS SoC verification.

Verilog-AMS wreal is a scalar real type. SV-RNM, by contrast, offers the powerful ability to define complex data types, providing a user-defined structure (record) to describe the net value. In a typical design, most analog nodes can be modeled using a single value that passes a voltage (or current) from one module to another. The ability to pass multiple values over a net can be very powerful when, for example, the impedance load impact on an analog signal needs to be modeled. Here is an example of a user-defined net (UDN) structure that holds voltage, current, and resistance values:
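The structure itself appeared as an image in the original post; as a sketch (the type and field names below are illustrative, in the spirit of Cadence's EEnet package rather than copied from it), it could look like this:

// Value carried on the user-defined net: each driver contributes a
// voltage V behind a source resistance R, plus an independent current I.
typedef struct {
  real V;  // driven voltage, volts
  real I;  // driven current, amps
  real R;  // driver resistance, ohms (0 here means: no resistive path)
} EEstruct;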

When there are multiple drivers on a single net, the simulator needs a resolution function to determine the final net value. When the net is defined as a single real value, common resolution functions such as min, max, average, and sum are built into the simulator. But defining a more complex structure for the net also requires the user to provide an appropriate resolution function for it. Here is an example of a net with three drivers modeled using the structural elements defined above (a voltage source with series resistance, a resistive load, and a current source):
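Again sketching what the missing image likely showed (the module name and element values are hypothetical), each continuous assignment below is a separate driver, and the simulator passes all three driver values to the net's resolution function:

// Three drivers on one net of user-defined nettype 'eenet' (the nettype
// and its resolution function are declared in the next sketch).
module three_driver_demo;
  eenet n1;
  assign n1 = '{V: 1.2, I: 0.0,     R: 100.0};  // voltage source, 100-ohm series R
  assign n1 = '{V: 0.0, I: 0.0,     R: 1.0e3};  // 1-kohm resistive load to ground
  assign n1 = '{V: 0.0, I: -100e-6, R: 0.0};    // ideal 100-uA current sink
endmodule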

To properly solve for the resulting output voltage, the resolution function for this net needs to perform a Norton conversion of the elements, sum their currents and conductances, and then calculate the resolved output voltage as the sum of currents divided by the sum of conductances.
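A resolution function along those lines (again our sketch, not the actual EEnet package code), together with the nettype declaration that ties it to the value type, might look like this:

// Resolution: convert each Thevenin driver (V behind R) to its Norton
// equivalent (current V/R in parallel with conductance 1/R), sum the
// currents and conductances, then recover the node voltage.
function automatic EEstruct res_EE(input EEstruct drivers[]);
  real Isum = 0.0;
  real Gsum = 0.0;
  foreach (drivers[k]) begin
    if (drivers[k].R > 0.0) begin
      Isum += drivers[k].V / drivers[k].R;  // Norton current of this driver
      Gsum += 1.0 / drivers[k].R;           // and its conductance
    end
    Isum += drivers[k].I;  // ideal current contributions add directly
  end
  return '{V: (Gsum > 0.0) ? Isum / Gsum : 0.0,
           I: Isum,
           R: (Gsum > 0.0) ? 1.0 / Gsum : 0.0};
endfunction

nettype EEstruct eenet with res_EE;  // value type + resolution = the UDN

With these pieces in one file, the three-driver example above resolves to (1.2/100 - 100e-6) / (1/100 + 1/1000) ≈ 1.08 V on the net.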

With some basic understanding of circuit theory, engineers can use the SV-RNM UDN capability to model the electrical behavior of many different circuits. While it is primarily intended to describe source/load impedance interactions, its use can be extended to systems including capacitors, switching circuits, RC interconnect, charge pumps, power regulators, and others. Although this approach extends the scope of functional verification, it is not a replacement for transistor-level simulation when accuracy, performance verification, or silicon correlation is required: it simply provides an efficient solution for discretely modeling small analog networks (one to several nodes). Mixed-signal simulation with an analog solver is still the best solution when large nonlinear networks must be evaluated.

Cadence provides a tutorial on EEnet usage, as well as the package (EEnet.pkg) with UDN definitions, resolution functions, and modeling examples. To learn more, please log in to your Cadence account to access the tutorial.





Virtuosity: Can You Build Lego Masterpieces with All Blocks of One Size?

Just as you need blocks of different sizes and styles to build great Lego masterpieces, a complex WSP-based design requires stitching together routing regions with multiple patterns that follow different WSSPDef periods. Let's see how you can achieve this.





Amazon, BigBasket, Grofers, and MedLife salute the delivery agents working at risk to their lives

Amazon, BigBasket, Grofers, and MedLife applaud the brave souls making deliveries at risk to their lives during the lockdown.





100 years of Ray: Satyajit in front of the camera, in the seven-and-a-half-minute 'A Ray of Genius'





Amazon, BigBasket, Grofers, and MedLife praise the fearlessness of delivery agents working amid the COVID-19 lockdown





Apple Accused Of Crackdown On Jailbreaking







Linux Kernel Purged Of Five-Year-Old Root Access Bug








UCanCode Remote Code Execution / Denial Of Service

UCanCode has active-x vulnerabilities which allow for remote code execution and denial of service attacks.





Avaya IP Office (IPO) 10.1 Active-X Buffer Overflow

Avaya IP Office (IPO) versions 9.1.0 through 10.1 suffer from an active-x buffer overflow vulnerability.





Microsoft Windows 10 scrrun.dll Active-X Creation / Deletion Issues

scrrun.dll on Microsoft Windows 10 suffers from file creation, folder creation, and folder deletion vulnerabilities.





Bash Profile Persistence

This Metasploit module writes an execution trigger to the target's Bash profile. The execution trigger executes a call back payload whenever the target user opens a Bash terminal. A handler is not run automatically, so you must configure an appropriate exploit/multi/handler to receive the callback.





Google Chrome 80.0.3987.87 Denial Of Service

Google Chrome version 80.0.3987.87 heap-corruption remote denial of service proof of concept exploit.





Odin Secure FTP Expert 7.6.3 Site Info Denial Of Service

Odin Secure FTP Expert version 7.6.3 Site Info denial of service proof of concept exploit.





FlashFXP 4.2.0 Build 1730 Denial Of Service

FlashFXP version 4.2.0 build 1730 denial of service proof of concept exploit.





Nsauditor 3.2.0.0 Denial Of Service

Nsauditor version 3.2.0.0 denial of service proof of concept exploit.





Product Key Explorer 4.2.2.0 Denial Of Service

Product Key Explorer version 4.2.2.0 Key denial of service proof of concept exploit.





Frigate 3.3.6 Denial Of Service

Frigate version 3.3.6 denial of service proof of concept exploit.





UltraVNC Launcher 1.2.4.0 Denial Of Service

UltraVNC Launcher version 1.2.4.0 Password denial of service proof of concept exploit.





UltraVNC Viewer 1.2.4.0 Denial Of Service

UltraVNC Viewer version 1.2.4.0 VNCServer denial of service proof of concept exploit.





UltraVNC Launcher 1.2.4.0 Denial Of Service

UltraVNC Launcher version 1.2.4.0 RepeaterHost denial of service proof of concept exploit.





SpotAuditor 5.3.4 Denial Of Service

SpotAuditor version 5.3.4 Name denial of service proof of concept exploit.





ZOC Terminal 7.25.5 Denial Of Service

ZOC Terminal version 7.25.5 denial of service proof of concept exploit.





dnsmasq-utils 2.79-1 Denial Of Service

dnsmasq-utils version 2.79-1 dhcp_release denial of service proof of concept exploit.





ZOC Terminal 7.25.5 Denial Of Service

ZOC Terminal version 7.25.5 Script denial of service proof of concept exploit.





Amcrest Dahua NVR Camera IP2M-841 Denial Of Service

Amcrest Dahua NVR Camera IP2M-841 denial of service proof of concept exploit.