
[Softball] Softball Game Against Clarke University Postponed





[Softball] Langston University Softball Defeats Haskell





[Volleyball] Haskell takes a loss at home versus University of St. Mary

Haskell loses a head-to-head battle in the fourth set on September 24, 2019.





[Volleyball] Volleyball falls to Lincoln Christian University

Haskell falls short of advancing in pool play and securing the automatic bid to represent the A.I.I. Conference at nationals.





[Women's Outdoor Track & Field] Five top-5 finishes in Long Jump, Shot Put and Javelin.

Ottawa, Kansas - The Haskell Indian Nations University women's track and field team competed at the Ottawa Braves Invitational on Saturday.





Adding Brady pays off with five prime-time games for Bucs

The Bucs have never had more than four prime-time games in a year, but in 2020 they'll face the Bears, Raiders, Giants, Saints and Rams in prime time.





[Women's Basketball] Loss to Wilberforce University in Conference Play Ends Women's Basketball ...





[Haskell Indians] Haskell Athletics Cancels Spring Seasons Effective Immediately





AMBA Adaptive Traffic Profiles: Addressing The Challenge

Modern systems-on-a-chip (SoCs) continue to increase in complexity, adding more components and compute power to accommodate new performance-hungry applications such as machine learning and autonomous driving. With the increased number of SoC components, such as CPUs, GPUs, accelerators, and I/O devices, comes increased demand to correctly model the interoperability of those components. Traditional simulation of complex systems requires accurate models of every component in the system and normally results in very long simulation times. A better way is to create a set of typical traffic profiles that describe the behavior of the system's masters and slaves. Such profiles should be abstract, so they can be applied to various protocols and interfaces, and portable, so they can be applied throughout different SoC design and verification cycles.

To address these challenges, Arm has recently announced the availability of the AMBA® Adaptive Traffic Profiles (AMBA ATP) specification, which lays the foundation for a new synthetic traffic framework. The specification includes detailed information about the various transaction types and their timing characteristics. The traffic profiles it defines are abstract in nature and can therefore be used to generate stimuli for the standard AMBA protocols in a variety of environments, such as RTL simulation, FPGA prototyping, and final SoC verification. Each profile includes a set of parameters that define timing relationships between transactions as well as within individual transactions. Even though a traffic profile represents the behavior of a single agent, profiles can be applied either concurrently (e.g., write and read traffic profiles running in parallel) or sequentially (e.g., one traffic profile completes before the next one starts). Moreover, when simulating a reasonably complex system, it is possible to coordinate the traffic profiles generated by multiple components. While providing an abstract definition of traffic profiles, the AMBA ATP specification focuses on their use with an AMBA AXI interface, outlining signaling and the timing relationships between different transaction phases and between different transactions. The same principles can be used to map the abstract traffic profiles to other AMBA protocols, such as the AMBA5 CHI protocol.

To facilitate adoption of the AMBA Adaptive Traffic Profiles, Cadence has recently announced the availability of a SystemVerilog UVM ATP Sequence Layer that automatically maps abstract ATP traffic to AMBA protocol-specific traffic generated by Cadence AMBA Verification IP. The ATP layer is implemented as a SystemVerilog UVM virtual sequence whose sequence item includes all the ATP transaction parameters defined in the specification.

Using the provided sequence infrastructure, users can write tests to define and coordinate traffic profiles for various components in the system. The ATP Layer automatically converts the abstract traffic profile into AMBA protocol-specific traffic, e.g., AMBA5 CHI protocol traffic.

The sample code below shows an example of a read profile translated by the Cadence ACE Verification IP into ACE protocol traffic.

   `uvm_do_with(ace_atp_vseq,
                {ace_atp_vseq.agentId           == agent_id;       // ATP agent id
                 ace_atp_vseq.atpDirection      == ATP_READ;       // direction of bursts issued by virtual sequence
                 ace_atp_vseq.startAddress      == start_address;  // start of address range being accessed
                 ace_atp_vseq.endAddress        == end_address;    // end of address range being accessed
                 ace_atp_vseq.atpDomain         == atp_domain;     // domain to use for transactions
                 ace_atp_vseq.addressPattern    == ATP_SEQUENTIAL; // address pattern
                 ace_atp_vseq.transactionSize   == 64;             // number of bytes in each burst
                 ace_atp_vseq.dataSize          == 4;              // number of bytes in each transfer
                 ace_atp_vseq.rate              == 150.0/(50.0);   // requestedBandwidth / clkFrequency
                 ace_atp_vseq.start             == ATP_EMPTY;      // start condition of the ATP FIFO
                 ace_atp_vseq.full              == 128;            // full level of the ATP FIFO
                 ace_atp_vseq.numOfTransactions == 500;            // number of bursts issued by this sequence
                 ace_atp_vseq.ARTV              == 2;              // sub-transaction delay
                 ace_atp_vseq.RBR               == 3;              // sub-transaction delay
                });
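
As noted above, profiles can also run concurrently. A minimal sketch of that case, assuming a second, hypothetical write-profile sequence handle ace_atp_wr_vseq and an ATP_WRITE direction literal by analogy with ATP_READ, could look like this:

   fork
       // read profile, configured as shown above
       `uvm_do_with(ace_atp_vseq,    {ace_atp_vseq.atpDirection    == ATP_READ;})
       // hypothetical write profile running in parallel on the same interface
       `uvm_do_with(ace_atp_wr_vseq, {ace_atp_wr_vseq.atpDirection == ATP_WRITE;})
   join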

In addition to the ATP Layer for Cadence Simulation-Based AMBA Verification IP, Cadence supports the ATP functionality in Acceleration-Based AMBA Verification IP. For detailed information about ATP support in Cadence Simulation-Based and Acceleration-Based Verification IP, visit ip.cadence.com.





The Hard Edges of Modern Lives

This new film is the latest remake of Devdas, but what is equally interesting is the fact that it is in conversation with films made in the West. Unlike Bhansali’s more spectacular version of the older story, Anurag Kashyap’s Dev.D is a genuine rewriting of Sarat Chandra’s novel. Kashyap doesn’t flinch from depicting the individual’s downward spiral, but he also gives women their own strength. He has set out to right a wrong—or, at least, tell a more realistic, even redemptive, story. If these characters have lost some of the affective depth of the original creations, they have also gained the hard edges of modern lives.

We don’t always feel the pain of Kashyap’s characters, but we are able to more readily recognize them. Take Chandramukhi, or Chanda, a schoolgirl humiliated by an MMS sex scandal. Her father, protective and patriarchal, says that he has seen the tape and thinks she knew what she was doing. “How could you watch it?” the girl asks angrily. And then, “Did you get off on it?” When was the last time a father was asked such a question on the Hindi screen? With its frankness toward sex and masturbation, Dev.D takes a huge step toward honesty. More than the obvious tributes to Danny Boyle’s Trainspotting, the over-extended psychedelic adventure on screen, or the moody style of film-making, it is the candour of such questions that makes Dev.D a film that is truly a part of world cinema.






DAC 2015: Jim Hogan Warns of “Looming Crisis” in Automotive Electronics

EDA investor and former executive Jim Hogan is optimistic about automotive electronics, but he has some concerns as well. At the recent Design Automation Conference (DAC 2015), he delivered a speech titled “The Looming Quality, Reliability, and Safety Crisis in Automotive Electronics...Why is it and what can we do to avoid it?"

Hogan gave the keynote speech for IP Talks!, a series of over 30 half-hour presentations located at the ChipEstimate.com booth. Presenters included ARM, Cadence, eSilicon, Kilopass, Sidense, SilabTech, Sonics, Synopsys, True Circuits, and TSMC. Held in an informal setting, the talks addressed the challenges faced by SoC design teams and showed how the latest developments in semiconductor IP can contribute to design success.

Jim Hogan delivers keynote speech at DAC 2015 IP Talks!

Hogan talked about several phases of automotive electronics. These include assisted driving to avoid collisions, controlled automation of isolated tasks such as parallel parking, and, finally, fully autonomous vehicles, which Hogan expects to see in 15 to 20 years. The top immediate priorities for automotive electronics designers, he said, will be government regulation, fuel economy, advanced safety, and infotainment.

More Code than a Boeing 777

According to Hogan, today’s automobiles use 50-100 microcontrollers each, resulting in a worldwide automotive semiconductor market of around $40 billion. The global market for advanced automotive electronics is expected to reach $240 billion by 2020. Software is growing faster in the automotive market than in smartphones. Hogan quoted a Ford vice president who observed that there are more lines of code in a Ford Fusion car than in a Boeing 777 airplane.

One unique challenge for automotive electronics designers is long-term reliability. This is because a typical U.S. car stays on the road for 15 years, Hogan said. Americans are holding onto new vehicles for a record 71.4 months.

Another challenge is regulatory compliance. Aeronautics is highly regulated from manufacturing to air traffic control, and the same will probably be true of automated cars. Hogan speculated that the Department of Transportation will be the regulatory authority for autonomous cars. Today, automotive electronics providers must comply with the ISO 26262 automotive functional safety specification.

So where do we go from here? “We’ve got to change our mindset,” Hogan said. “We’ve got to focus on safety and reliability and demand a different kind of engineering discipline.” You can watch Hogan’s entire presentation by clicking on the video icon below, or clicking here. You can also watch other IP Talks! videos from DAC 2015 here.

https://youtu.be/qL4kAEu-PNw

 

Richard Goering

Related Blog Posts

DAC 2015: See the Latest in Semiconductor IP at “IP Talks!”

Automotive Functional Safety Drives New Chapter in IC Verification





EDA Retrospective: 30+ Years of Highlights and Lowlights, and What Comes Next

In 1985, as a relatively new editor at Computer Design magazine, I was asked to go forth and cover a new business called CAE (computer-aided engineering). I knew nothing about it, but I had been writing about design for test, so there seemed to be somewhat of a connection. Little did I know that “CAE” would turn into “EDA” and that I’d write about it for the next 30 years, for Computer Design, EE Times, Cadence, and a few others.

Now that I’m about to retire, I’m looking back over those 30 years. What a ride it has been! By the numbers, I covered 31 Design Automation Conferences (DACs), hundreds of new products, dozens of acquisitions and startups, dozens of lawsuits, and some blind alleys that didn’t work out (like “silicon compilation”). Chip design went from gate arrays and PLDs with a few thousand gates to processors and SoCs with billions of transistors.

In 1985 there were three big CAE vendors – Daisy Systems, Mentor Graphics, and Valid Logic. All sold bundled packages that included workstations and CAE software; in fact, Daisy and Valid designed and manufactured their own workstations. In the early 1980s a workstation with schematic capture and gate-level logic simulation might have set you back $120,000. In 1985 OrCAD, now part of Cadence, came out with a $500 schematic capture package running on IBM PCs.

Cadence and Synopsys emerged in the late 1980s, and by the 1990s the EDA industry was pretty much a software-only business (apart from specialized machines like simulation accelerators). Since the early 1990s the “big three” EDA vendors have been Cadence, Synopsys, and Mentor, giving the industry stability but allowing for competition and innovation.

Here, in my view, are some of the highlights that occurred during the past 30 years of EDA.

EDA is a Highlight

The biggest highlight in EDA is the existence of a commercial EDA industry! Marching hand in hand with the fabless semiconductor revolution, commercial EDA made it possible for hundreds of companies to design semiconductors, as opposed to a small handful that could afford large internal CAD operations and fabs. With hundreds of semiconductor companies as opposed to a half-dozen, there’s a lot more creativity, and you get the level of sophistication and intelligence that you see in your smartphone, video camera, tablet, gaming console, and car today.

CAE + CAD = EDA. This is not just a terminology issue. By the mid-1980s it became clear that front-end design (CAE) and physical design (CAD) belonged together. The big CAE vendors got involved in IC and PCB CAD, and presented increasingly integrated solutions. People got tired of writing “CAE/CAD” and “EDA” was born.

The move from gate-level design to RTL. This move happened around 1990, and in my view this is EDA’s primary technology success story during the past 30 years. Moving up in abstraction made the design and verification of much larger chips possible. Going from gate-level schematics to a hardware description language (HDL) revolutionized logic design and verification. Which would you rather do – draw all the gates that form an adder, or write a few lines of code and let a synthesis tool find an adder in your chosen technology?
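
To make the abstraction gap concrete, here is a minimal, illustrative Verilog sketch of those "few lines of code" (not from any particular flow): a registered adder that a synthesis tool maps to whatever gate-level structure suits the target technology.

    module adder8 (
        input  wire       clk,
        input  wire [7:0] a, b,
        output reg  [8:0] sum    // one extra bit for the carry out
    );
        // One line of RTL replaces a schematic full of hand-drawn full adders;
        // the synthesis tool picks the adder architecture for the target library.
        always @(posedge clk)
            sum <= a + b;
    endmodule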

Two developments made this shift in design possible. One was the emergence of commercial RTL synthesis (or “logic synthesis”) tools from Synopsys and other companies, which happened around 1990. Another was the availability of Verilog, developed by Gateway Design Automation and purchased by Cadence in 1989, as a standard RTL HDL. Although most EDA vendors at the time were pushing VHDL, designers wanted Verilog and that’s what most still use (with SystemVerilog coming on strong in the verification space).

IC functional verification underwent huge changes in the late 1990s and early 2000s, largely due to new technology developed by Verisity, which was acquired by Cadence in 2005. Before Verisity, verification engineers were writing and running directed tests in an ad-hoc manner. Verisity introduced or improved technologies such as pseudo-random test generation, coverage metrics, reusable verification IP, and semi-automated verification planning. The Verisity “e” language became a widely used hardware verification language (HVL).
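
Verisity's e is its own language, but the style it popularized carries over directly into today's SystemVerilog testbenches. Here is a minimal, illustrative SystemVerilog sketch of that constrained-random, coverage-driven approach (the transaction and field names are hypothetical):

    class bus_txn;
        rand bit [7:0] addr;
        rand bit       write;

        // Constrained-random generation: keep stimulus inside the legal range
        constraint legal_addr { addr inside {[8'h10 : 8'hEF]}; }

        // Coverage metrics: track which corners of the stimulus space were hit
        covergroup cg;
            coverpoint addr { bins lo = {[8'h10:8'h7F]}; bins hi = {[8'h80:8'hEF]}; }
            coverpoint write;
            cross addr, write;
        endgroup

        function new();
            cg = new();
        endfunction
    endclass

    module tb;
        initial begin
            bus_txn t = new();
            repeat (100) begin
                if (!t.randomize()) $error("randomization failed");
                t.cg.sample();  // pseudo-random generation plus coverage collection
            end
        end
    endmodule

The coverage report, not the test list, then tells you when verification is done, which is the shift Verisity introduced.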

The biggest way that EDA has expanded its focus has been through semiconductor IP. Today Synopsys and Cadence are leading providers in this area. Thanks to the availability of design and verification IP, many SoC designs today reuse as much as 80% of previous content. This makes it much, much faster to design the remaining portion. While IP began with fairly simple elements, today commercially available IP can include whole subsystems along with the software that runs on them. With IP, EDA vendors are providing not only design tools but design content.

Finally, the EDA industry has done an amazing job of keeping up with SoC complexity and with advanced process nodes. Thanks to intense and early collaboration between foundries, IP, and EDA providers, tools and IP have been ready for process nodes going down to 10nm.

Where Does ESL Fit?

In some ways, electronic system level (ESL) design is both a lowlight and a highlight. It’s a lowlight because people have been talking about it for 30 years and the acceptance and adoption have come very slowly. ESL is a highlight because it’s finally starting to happen, and its impact on design and verification flows could be dramatic. Still, ESL is vaguely defined and can be used to describe almost anything that happens at a higher abstraction level than RTL.

High-level synthesis (HLS) is an ESL technology that is seeing increasing use in production environments. Current HLS tools are not restricted to datapaths, and they produce RTL code that gives better quality of results than hand-written RTL. Another ESL methodology that’s catching on is virtual prototyping, which lets software developers write software pre-silicon using SystemC models. Both HLS and virtual prototyping are made possible by the standardization of SystemC and transaction-level modeling (TLM). However, it’s still not easy to use the same SystemC code for HLS and virtual prototyping.

And Now, Some Lowlights

Every new industry has some twists and turns, and EDA is no exception. For example, the EDA industry in the 1980s and 1990s sparked a lot of lawsuits. At EE Times my colleagues and I wrote a number of articles about EDA legal disputes, mostly about intellectual property, trade secrets, or patent issues. Over the past decade, fortunately, there have been far fewer EDA lawsuits than we had before the turn of the century.

Another issue that was troublesome in the 1980s and 1990s was so-called “standards wars.” These would occur as EDA vendors picked one side or the other in a standards dispute. For example, power intent formats were a point of conflict in the early 2000s, but the Common Power Format (CPF) and the Unified Power Format (UPF) are on the road to convergence today with the IEEE 1801 effort. As mentioned previously, Verilog and VHDL were competing for adoption in the early 1990s. For the most part, Verilog won, showing that the designer community makes the final decision about which standards will be used.

How on earth did there get to be something like 30 DFM (design for manufacturability) companies 10-12 years ago? To my knowledge, none of these companies are around today. A few were acquired, but most simply faded away. A lot of investors lost money. Today, VCs and angel investors are funding very few EDA or IP startups. There are fewer EDA startups than there used to be, and that’s too bad, because that’s where a lot of the innovation comes from.

Here’s another current lowlight -- not enough bright engineering or computer science students are joining EDA companies. They’re going to Google, Apple, Facebook, and the like. EDA is perceived as a mature industry that is still technically very difficult. We need to bring some excitement back into EDA.

Where Is EDA Headed?

Now we come to what you might call “headlights” and look at what’s coming. My list includes:

  • System Design Enablement. This term has been coined by Cadence to describe a focus on whole systems or end products including chips, packages, boards, embedded software, and mechanical components. There are far more systems companies than semiconductor companies, leaving a large untapped market that’s looking for solutions.
  • New frontiers for EDA. At a 2015 Design Automation Conference speech, analyst Gary Smith suggested that EDA can move into markets such as embedded software, mechanical CAD, biomedical, optics, and more.
  • Vertical markets. EDA has until now been “horizontal,” providing the same solution for all market segments. Going forward, markets like consumer, automotive, and industrial will have differing needs and will need optimized tools and IP.
  • Internet of Things. This is a current buzzword, but the impact on EDA remains uncertain. Many IoT devices will be heavily analog, use mature process nodes, and be dirt cheap. Lip-Bu Tan, Cadence CEO, recently pointed out that the silicon percentage of IoT revenue will be small and that a lot of the profits will be on the service side.

Moving On

For the past six years I’ve been writing the Industry Insights blog at Cadence.com. All things change, and with this post comes a farewell – I am retiring in late June and will be pursuing a variety of interests other than EDA. I’ll be watching, though, to see what happens next in this small but vital industry. Thanks for reading!

Richard Goering

 





Can Voltus do an IR drop analysis on a negative supply?

I have been using Voltus to do IR drop analysis but I got caught on one signal. It is negative. When I use:

set_pg_nets -net negsupply -voltage -5 -threshold -4.5 -package_net_name NEGSUP -force

Voltus dies with a backtrace. The beginning of the trace suggests that the problem is that it set the maximum to -5 and the minimum to 0. Is there another way to express a negative voltage supply for IR drop analysis?





Distortion Summary in New CDNLive YouTube Video and at IEEE IMS2014 Next Week!

Hi Folks, Check out this great new video on YouTube: CDNLive SV 2014: PMC Improves Visibility and Performance with Spectre APS In this video from CDNLive Silicon Valley 2014, Jurgen Hissen, principal engineer, MSCAD, at PMC, discusses an aggressive...(read more)





Celebrating Five Years of Performance-Optimized Arm-Based SoCs: Now including AMBA5

It’s been quite a long 5-year journey building and deploying Performance Analysis, Verification, and Debug capabilities for Arm-based SoCs. We worked with some of the smartest engineers on the planet. First with the engineers at Arm, with whom we...(read more)





Portable Stimulus User Gives Perspec PSS Technology Nearly Perfect Review

It’s always good to hear what real users think of products. Here is a very detailed review (~4,000 words) by an anonymous user, nicknamed Ant-Man (from the movie). Overall it’s a very strong endorsement of Perspec, and summarize...(read more)





What’s Hot in Verification at this Year’s CDNLive? It’s Portable Stimulus Again!

CDNLive is a user conference, and verification is one of the largest categories of content with multiple tracks covering multiple days. Portable stimulus is one of the hottest new areas in verification, and continues to be popular in all venues. At l...(read more)





RAMAC Park and the Origin of the Disk Drive

Did you know that there is a park in San Jose named after a disk drive? Actually, technically it is named after the first computer that used disk drives. You couldn't just go and buy a disk drive...






Automotive Security in the World of Tomorrow - Part 1 of 2

Autonomous vehicles are coming. According to the U.S. Department of Transportation, about 37,000 people died in car accidents in the United States in 2018. Having safe, fully automatic vehicles could drastically reduce that number—but the trick is figuring out how to make an autonomous vehicle safe. Internet-enabled systems in cars are more common than ever, and their use is unlikely to slow or stop—and while they provide many conveniences to a driver, they also represent another attack surface that a potential criminal could use to disable a vehicle while it is driving.

So—what’s being done to combat this? Green Hills Software is on the case, and they explained the landscape of security in automotive systems in a presentation given by Max Hinson in the Cadence Theater at DAC 2019. They have software embedded in most parts of a car, and all the major OEMs use their tech. The challenge they’ve taken on is far from a simple one—between the sheer complexity of modern automotive computer systems, safety requirements like the ISO 26262 standard, and the cost to develop and deploy software, they’ve got their work cut out for them. It’s the complexity of the systems that represents the biggest challenge, though. The autonomous cars of the future have dynamic behaviors and cognitive networks, require security certification to at least ASIL-D, and require the kind of cyber security you’d have on an important conventional computer system to cover the internet-enabled systems—and all of this comes with a caveat: with current verification abilities, it’s not possible to test every test case for the autonomous system. You’d be looking at trillions of test cases to reach full coverage—not even the strongest emulation units can cover that today.

With regular cars, you could do testing with crash-test dummies, and ramming the car into walls at high speeds in a lab and studying the results. Today, though, that won’t cut it. Testing like that doesn’t see if a car has side-channel vulnerabilities in its infotainment system, or if it can tell the difference between a stop sign and a yield sign. While driving might seem simple enough to those of us that have been doing it for a long time, to a computer, the sheer number of variables is astounding. A regular person can easily filter what’s important and what’s not, but a machine learning system would have to learn all of that from scratch. Green Hills Software posits that it would take nine billion miles of driving for a machine learning system of today’s caliber to reach an average driver’s level—and for an autonomous car, “average” isn’t good enough. It has to be perfect.

A certifier for autonomous vehicles has a herculean task, then. And if that doesn’t sound hard enough, consider this: in modern machine-vision systems, something called the “single pixel hack” can be exploited to mess them up. Let’s say you have a stop sign, and a system designed to recognize that object as a stop sign. Randomly, you change one pixel of the image to a different color, and then check to see if the system still recognizes the stop sign. To a human, who knows that a stop sign is octagonal, red, and has “STOP” written in white block letters, a stop sign that’s half blue and maybe bent a bit out of shape is still, obviously, a stop sign—plus, we can use context clues to ascertain that sign at an intersection where there’s a white line on the pavement in front of our vehicle probably means we should stop. We can do this because we can process the factors that identify a stop sign “softly”—it’s okay if it’s not quite right; we know what it’s supposed to be. Having a computer do the same is much more difficult. What if the stop sign has graffiti on it? Will the system still recognize it as a stop sign? How big of an aberration needs to be present before the system no longer acknowledges the mostly-red, mostly-octagonal object that might at one point have had “stop” written on it as a stop sign? To us, a stop sign is a stop sign, even with one pixel changed—but change it in the right spot, and the computer might disagree.

The National Institute of Standards and Technology tracks vulnerabilities along those lines in all sorts of systems; by their database, a major vulnerability is found in Linux every three days. And despite all our efforts to promote security, this isn’t a battle we’re winning right now—the number of vulnerabilities is increasing all the time.

Check back next time to see the other side: what does Green Hills Software propose we do about these problems? Read part 2 now.





Automotive Security in the World of Tomorrow - Part 2 of 2

If you missed the first part of this series, you can find it here.

So: what does Green Hills Software propose we do?

The issue of “solving security” is, at its core, impossible—security can never be 100% assured. What we can do is make it as difficult as possible for security holes to develop. This can be done in a couple of ways; one is to keep code small, in small packages executed by a “safing plan”—having each individual component be easier to verify goes a long way toward ensuring the security of the system. Don’t have sensors connect directly to objects—instead, have them output to the safing plan first, which can establish control and ensure that nothing can be used incorrectly or in unintended ways. Make sure individual software components are sufficiently isolated to minimize the chances of a side-channel attack being viable.

What all of these practices mean, however, is that a system needs to be architected with security in mind from the very beginning. Managers need to emphasize and reward secure development right from the planning stages, or the comprehensive approach required to ensure that a system is as secure as it can be won’t come together. When something in someone else’s software breaks, pay attention—mistakes are costly, but only one person has to make it before others can learn from it and ensure it doesn’t happen again. Experts are experts for a reason—when an independent expert tells you something in your design is not secure, don’t brush them off because the fix is expensive. This is what Green Hills Software does, and it’s how they ensure that their software is secure.

Now, where does Cadence fit into all of this? Cadence has a number of certified secure offerings a user can take advantage of when planning their new designs. The Tensilica portfolio of IP is a great way to ensure basic components of your design are foolproof. As always, the Cadence Verification Suite is great for security verification in both simulation and emulation, and JasperGold platform’s formal apps are a part of that suite as well.

We are entering a new age of autonomous technology, and with that new age we have to update our security measures to match. It’s not good enough to “patch up” security at the end—security needs to be at the forefront of a verification engineer's or hardware designer’s mind at all stages of development. For a lot of applications, quite literally, lives are at stake. It’s uncharted territory out there, but with Green Hills Software and Cadence’s tools and secure IP, we can ensure the safety of tomorrow.





RAK Attack: Better Driver Tracing, Faster Palladium Build Time, UVM Register Map Automation

Looking to learn? There's a bunch of new RAKs (Rapid Adoption Kits) available online now!

1) Indago 19.09 Better Driver Tracing and More

Are you new to Indago and not sure where to start? Luckily, there’s a new Rapid Adoption Kit for you: the Indago 19.09 Overview RAK! This neat package contains everything you need to get your debugging started through Indago. In four short labs, plus a brief introductory lab, you’ll have all the basics of Indago 19.09 down—the Indago working environment, the SmartLog, how Indago interacts with the rest of the Cadence Verification Suite, and how Indago uses HDL driver tracing.

Lab 1 discusses the various debugging tools included in Indago and teaches you how to customize your Indago windows and environment settings. Lab 2 covers the SmartLog feature and talks about analyzing and filtering its messages to suit your needs, as well as how to interact with the waveform marker. Lab 3 is an interactive Indago debugging experience—it’ll walk you through how to use Indago and its features in an actual working environment: setting breakpoints, using simulator commands in the Indago console, toolbars, switches, and more. Lab 4 is all things HDL tracing—recording debug data, an introduction to debug assertions, waveform visualizations, driving expression analysis, and single-step driver tracing, among other things.

Interested? Check out the RAK here.

2) IXCOM MSIE: Faster Palladium Build Time

Got several testbenches you want to compile with the same DUT and tests, and you want to do it fast? With IXCOM, all you have to do to compile those different testbenches is use the xrun command for each one after compiling your DUT. But what exactly is IXCOM, and how does one start using it? This quick RAK can help—here, you’ll learn the basics of using MSIE features with IXCOM, complete with an example to get you started. Using MSIE can vastly improve your build times with Palladium, and using IXCOM is the best way to shrink that tedious rebuild time as small as it can get. Check out this RAK here.

3)  JasperGold Control and Status Register Verification App Automates UVM Register Map Verification

New to the JasperGold Control and Status Register (CSR) Verification App for your UVM testbenches? Don’t worry; there’s a RAK for that! This eponymous RAK can get you up and running in no time, helping you automate your checks from UVM register map specs. With this RAK, you’ll learn the basics of the JasperGold CSR app, how to use its Proof Accelerator, and more. CSR features a model-based approach to predicting a register’s expected value, supports pipelined interfaces and all IP-XACT access policies, and can fully model any expected register value. It also supports register aliases, read and write semantics, and separate read/write data latencies in any given field.

If this functionality sounds up your alley, you can take a look at this RAK here.





BoardSurfers: Allegro In-Design IR Drop Analysis: Essential for Optimal Power Delivery Design

All PCB designers know the importance of proper power delivery for successful board design. Integrated circuits need the power to turn on, and ICs with marginal power delivery will not operate reliably. Since power planes can...(read more)





BoardSurfers: Five Easy Steps to Create Footprints Using Packages in Library Creator

In my previous blog, I talked about creating a footprint using an existing template in Allegro ECAD-MCAD Library Creator and explained how easily you can access an existing template and create a package from it by just clicking a button. In this blog...(read more)





New Rapid Adoption Kit (RAK) Enables Productive Mixed-Signal, Low Power Structural Verification

All engineers can enhance their mixed-signal, low-power structural verification productivity by learning while doing with the PIEA RAK (Power Intent Export Assistant Rapid Adoption Kit). They can verify a mixed-signal chip by generating a macromodel for their analog block automatically and running it through Conformal Low Power (CLP) to perform a low-power structural check.

The power structure integrity of a mixed-signal, low-power block is verified via Conformal Low Power integrated into the Virtuoso Schematic Editor Power Intent Export Assistant (VSE-PIEA). Here is the flow.

 

Applying the flow iteratively from lower to higher levels can verify the power structure.

Cadence customers can learn more in a Rapid Adoption Kit (RAK) titled IC 6.1.5 Virtuoso Schematic Editor XL PIEA, Conformal Low Power: Mixed-Signal Low Power Structural Verification.

The RAK includes a demo design, with instructions on how to set up the user environment. It introduces the Power Intent Export Assistant (PIEA) feature implemented in the Virtuoso IC615 release. The extracted power intent is then verified by calling Conformal Low Power (CLP) inside the Virtuoso environment.

  • Last Update: 11/15/2012.
  • Validated with IC 6.1.5 and CLP 11.1

The RAK uses a sample test case to go through PIEA + CLP flow as follows:

  • Setup for PIEA
  • Perform power intent extraction
  • CPF Import: It is recommended to import macro CPF, as opposed to design CPF, for sub-blocks. If you choose to import design CPF files, please make sure the design CPF file has power domain information for all the top-level boundary ports
  • Generate macro CPF and design CPF
  • Perform low power verification by running CLP

It is also recommended to go through these older RAKs as prerequisites:

  • Conformal Low Power, RTL Compiler and Incisive: Low Power Verification for Beginners
  • Conformal Low Power: CPF Macro Models
  • Conformal Low Power and RTL Compiler: Low Power Verification for Advanced Users

To access all these RAKs, visit our RAK Home Page for the Synthesis, Test, and Verification flows.

Note: To access the above documents, log on to the Cadence Online Support (COS) website with your Cadence credentials. Cadence Online Support (https://support.cadence.com/) is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you can receive new solutions, Application Notes (technical papers), videos, manuals, and more.

You can send us your feedback by adding a comment below or using the feedback box on Cadence Online Support.

Sumeet Aggarwal





New Incisive Low-Power Verification for CPF and IEEE 1801 / UPF

On May 7, 2013, Cadence announced a 30% productivity gain in the June 2013 Incisive Enterprise Simulator 13.1 release. Advanced debug visualization, faster turnaround time, and the extension of eight years of low-power verification innovation to IEEE 1801/UPF are the key capabilities in the release.

When we talk about low-power verification, it's easy to equate it with simulation. For certain, simulation is the heart of a low-power verification solution; it enables engineers to run their design in the context of power intent. The challenge is that a simulation-only approach is inadequate. For example, if engineers could achieve SoC quality by verifying the individual function of each power control module (PCM), then simulation could be enough. For a single power domain, simulation can be enough.

However, when the SoC has multiple power domains -- and we have seen SoCs with hundreds of them -- engineers have to check the PCMs and all of the arcs between the power modes.  These SoCs often synchronize some of the domain switching to reduce overall complexity, creating the potential for signal skew errors on the control signals for the connected domains.  Managing these complexities requires verification methodologies including advanced debug, verification planning, assertion-based verification, Universal Verification Methodology - Low Power (UVM-LP), and more (see Figure 1).

 

Figure 1:  Comprehensive Low-Power Verification 
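
As an illustration of the assertion-based piece of this methodology stack, here is a minimal SystemVerilog sketch (all signal and mode encodings are hypothetical, not from any product) that checks one power-mode arc and the skew between the control signals of two synchronized domains:

    module pcm_checker (
        input logic       clk, rst_n,
        input logic [1:0] mode,          // 00 = OFF, 01 = RETENTION, 10 = ON
        input logic       pso_a, pso_b   // shut-off controls of two synchronized domains
    );
        // Arc check: leaving OFF must go through RETENTION, never straight to ON
        property p_no_off_to_on;
            @(posedge clk) disable iff (!rst_n)
            (mode == 2'b00) |=> (mode != 2'b10);
        endproperty
        a_arc: assert property (p_no_off_to_on)
            else $error("illegal power mode arc: OFF -> ON");

        // Skew check: synchronized domains must switch within one cycle of each other
        property p_pso_skew;
            @(posedge clk) disable iff (!rst_n)
            $rose(pso_a) |-> ##[0:1] pso_b;
        endproperty
        a_skew: assert property (p_pso_skew)
            else $error("power shut-off skew between synchronized domains");
    endmodule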

But even advanced verification methodologies on top of simulation aren't enough. For example, the state machine that defines the legal and illegal power mode transitions is often written in software. The speed and capacity of the Palladium emulation platform are ideal for verification in this context, and it is integrated with simulation, sharing debug, UVM acceleration, and static low-power checks. It also reports verification progress into a holistic plan for the SoC. Another example is the ability to compare the design in the implementation flow with the design running in simulation, to make sure that what we verify is what we intend to build.

Taken together, verification across multiple engines provides the comprehensive low-power verification needed for today's advanced node SoCs.  That's the heart of this low-power verification announcement. 

Another point you may have noticed is the extension of the Common Power Format (CPF) based power-aware support in the Incisive Enterprise Simulator to IEEE 1801.  We chose to bring IEEE 1801 to simulation first because users like you sometimes need to mix vendors for regression flows.  Over time, Cadence will extend the low-power capabilities throughout its product suite to IEEE 1801.

If you are using CPF today, you already have the best low-power solution. The evidence is clear: the upcoming IEEE 1801-2013 update includes many of the CPF features contributed to 1801/UPF to enable methodology convergence. Since you already have those features in the CPF flow, migrating before you have a mature IEEE 1801-2013 tool flow would reduce the functionality you have today.

If you are using Unified Power Format (UPF) 1.0 today, you want to start planning your move toward the IEEE 1801-2013 standard. A good first step would be to move to the IEEE 1801-2009 standard, which fills holes in the earlier UPF 1.0 definition. While it does lack key features of -2013, it is an improvement that will make the migration to -2013 easier. The Incisive 13.1 release runs both UPF 1.0 and IEEE 1801-2009 power intent today.

Over the next few weeks you'll see more technical blogs about the low-power capabilities coming in the Incisive 13.1 release.  You can also join us on June 19 for a webinar that will introduce those capabilities using the reference design supplied with the Incisive Enterprise Simulator release.

=Adam "The Jouler" Sherer

(Yes, "Sherilog" is still here.  :-) )





ST Microelectronics Success with IEEE 1801 / UPF Incisive Simulation - Video

ST Microelectronics reported their success with IEEE 1801 / UPF low-power simulation using Incisive Enterprise Simulator at CDNLive India in November 2013. We were able to meet with Mohit Jain just after his presentation and recorded this video that explains the key points in his paper.

With eight years of experience and pioneering technology in native low-power simulation, Mohit was able to apply Incisive Enterprise Simulator to a low-power demonstrator in preparation for use with a production set-top box chip. Mohit was impressed with the ease with which he was able to reuse his existing IEEE 1801 / UPF code, including the power format files and the macro models coded in his Liberty files. Mohit also discusses how he used the power-aware Cadence SimVision debugger.

The Cadence low-power verification solution for IEEE 1801 / UPF also incorporates the patent-pending Power Supply Network visualization in the SimVision debugger.  You can learn more about that in the Incisive low-power verification Rapid Adoption Kit for IEEE 1801 / UPF here in Cadence Online Support.

Just another happy Cadence low-power verification user!

Regards,

 Adam "The Jouler" Sherer 





gm of an active mixer

Hi all,

What is the most accurate way to simulate the gm of the RF transistors (RF stage) of an active mixer (single-balanced or Gilbert cell)?

I tried to simulate it in many ways, such as:

1. DC annotation (but of course it's incorrect due to the switching operation of the mixer)

2. d(i_ds)/d(v_gs) using HB analysis and then taking the value at zero (since it is a DC characteristic). For this, I chose in the HB simulator results: Voltage, spectrum, rms, magnitude.

3. Using the OP, OPT buttons in the calculator and then extracting the gm of the transistor. 

The problem is that each method gives a different value, which makes designing an active mixer very difficult. In addition, when I simulate the voltage conversion gain of the active mixer and try to compare it to the formula (2/pi)*gm*RL (either in linear or dB), I get numbers that are way off from the simulations. I understand that I would not get exactly the same results, but not different by hundreds of percent.

I see in many publications that people plot graphs of a mixer's gm vs. different parameters, and I am starting to doubt whether those results are correct.

I would appreciate any help,

Thanks in advance





LNA output noise floor at receiver front end.

Hi,

I am designing a broadband (100 MHz - 6 GHz BW) receiver chain for a radar/RCS measurement tester. I will put a low-noise amplifier after the antenna input, followed by a mixer (10 MHz IF BW) and a digitizer.

I am facing a problem regarding the LNA. The bandwidth of the LNA is approximately 6 GHz (100 MHz - 6 GHz), the gain is 25-35 dB, and the NF is less than 2 dB. I am uncertain about the noise floor at the output of the LNA. I don't know the exact SNR at the input of the LNA, but it should be good. The system will operate on a stepped-CW waveform, so the receiver input signal will sweep over the BW with some step size.

So what will the LNA output noise floor be? I assume we can neglect the role of input noise because it will be less than the internal noise of the LNA.

Will it be the LNA internal noise (thermal noise over the BW) only?

Or will it be the LNA internal noise (thermal noise over the BW) + LNA gain? -78 + 25 = -53 dBm? The internal noise contribution should be small because the NF is less than 3 dB.

I have practically observed that the output noise floor is much lower than even the thermal noise over the LNA BW. I have gone through some tutorials where the formula says (internal noise + input noise) + gain. In my case the input noise should be much less than the theoretical internal noise.

Thanks





How to run a regression and merge the ncsim.trn files of all tests into a single file to view the waveforms in SimVision?

Hi all,

         I want to know how to run a regression in Cadence and merge the ncsim.trn files of each test case into a single file so I can view all the waveforms in SimVision. I am using a Makefile to invoke the test cases.

         eg:-

               test0:

                     irun -uvm -sv -access +rwc $(RTL) $(INTER) $(PKG) $(TOP) $(probe) +UVM_VERBOSITY=UVM_MEDIUM +UVM_TESTNAME=test0

             test1:

                   irun -uvm -sv -access +rwc $(RTL) $(INTER) $(PKG) $(TOP) $(probe) +UVM_VERBOSITY=UVM_MEDIUM +UVM_TESTNAME=test1

          I just want to call test0 followed by test1, or run both tests in parallel, and view the waveforms for both test cases.

        I am new to this tool; any help would be appreciated.

                     





Incisive Metrics Center User Guide

Hi Team,

I would like to download the "Incisive Metrics Center User Guide", but I could not find it in cadence/support/manuals. Can you please provide the link or path to download it? I am doing functional coverage with IMC.

Thank You,

Mahesh





How to refer to a library compiled by INCISIVE 13.20 in Xcelium 19.30

Hi,

I am facing this elaboration error when using Xcelium:

Command>

    xmverilog -v200x +access+r +xm64bit -f vlist -reflib plib -timescale 1ns/1ps

Log>

    xmelab: *E,CUVMUR (<name>.v,538|18): instance 'LUTP0.C GLAT3' of design unit 'tlatntscad12' is unresolved in 'worklib.LUTP0:v'.

I guess the plib was not referenced in the simulation configuration, because tlatntscad12 is included in plib.

The plib was compiled by INCISIVE 13.20, and I am using Xcelium 19.30.

Please tell me the correct command to refer to a library directory compiled by a different version.

Thank you,





IC Packagers: Five Steps to IC-Driven Package Design

They say Moore's law is slowing. It may be slowing, but it is still running - it has not stopped! And it has been running at full throttle for quite a few decades now. The net result of this run? Well, you can't design ICs in isolation from the...(read more)



  • Allegro Package Designer


IC Packagers: Time-Saving Alternatives to Show Element

In the Allegro back-end layout products like Allegro Package Designer Plus, it would be reasonable to assume that the most often used command is none other than “show element” (shortcut key F4). This command, runnable at nearly any t...(read more)



  • Allegro Package Designer
  • Allegro PCB Editor


Einstein's puzzle (SystemVerilog) solved by Incisive92

Hello All,

The following is Einstein's puzzle solved by Cadence Incisive92 (solved in less than 3 seconds -> FAST!!!!!!)

Thanks,

Vinay Honnavara

Verification engineer at Keyu Tech

vinayh@keyutech.com

 

 

 

// Author: Vinay Honnavara
// Einstein formulated this problem; he said that only 2% of the world can solve it.
// There are 5 different parameters, each with 5 different attributes.
// The problem:
//
// -> In a street there are five houses, painted five different colors (RED, GREEN, BLUE, YELLOW, WHITE)
// -> In each house lives a person of a different nationality (GERMAN, NORWEGIAN, SWEDISH, DANISH, BRITISH)
// -> These five homeowners each drink a different kind of beverage (TEA, WATER, MILK, COFFEE, BEER),
// -> smoke a different brand of cigar (DUNHILL, PRINCE, BLUE MASTER, BLENDS, PALL MALL),
// -> and keep a different pet (BIRD, CATS, DOGS, FISH, HORSES)

///////////////////////////////////////////////////////////////////////////////////////
// *************** Einstein's riddle is: Who owns the fish? ***************************
///////////////////////////////////////////////////////////////////////////////////////

/*
Necessary clues:

1. The British man lives in a red house.
2. The Swedish man keeps dogs as pets.
3. The Danish man drinks tea.
4. The Green house is next to, and on the left of the White house.
5. The owner of the Green house drinks coffee.
6. The person who smokes Pall Mall rears birds.
7. The owner of the Yellow house smokes Dunhill.
8. The man living in the center house drinks milk.
9. The Norwegian lives in the first house.
10. The man who smokes Blends lives next to the one who keeps cats.
11. The man who keeps horses lives next to the man who smokes Dunhill.
12. The man who smokes Blue Master drinks beer.
13. The German smokes Prince.
14. The Norwegian lives next to the blue house.
15. The Blends smoker lives next to the one who drinks water.
*/




typedef enum bit [2:0]  {red, green, blue, yellow, white} house_color_type;
typedef enum bit [2:0]  {german, norwegian, brit, dane, swede} nationality_type;
typedef enum bit [2:0]  {coffee, milk, water, beer, tea} beverage_type;
typedef enum bit [2:0]  {dunhill, prince, blue_master, blends, pall_mall} cigar_type;
typedef enum bit [2:0]  {birds, cats, fish, dogs, horses} pet_type;




class Einstein_problem;

    rand house_color_type house_color[5];
    rand nationality_type nationality[5];
    rand beverage_type beverage[5];
    rand cigar_type cigar[5];
    rand pet_type pet[5];
    
    constraint einstein_riddle_solver {
    
        
    
        foreach (house_color[i])
            foreach (house_color[j])
               if (i != j)
                house_color[i] != house_color[j];
        foreach (nationality[i])
            foreach (nationality[j])
               if (i != j)
                nationality[i] != nationality[j];
        foreach (beverage[i])
            foreach (beverage[j])
               if (i != j)
                beverage[i] != beverage[j];
        foreach (cigar[i])
            foreach (cigar[j])
               if (i != j)
                cigar[i] != cigar[j];
        foreach (pet[i])
            foreach (pet[j])
               if (i != j)
                pet[i] != pet[j];
    
    
        //1) The British man lives in a red house.
        foreach(nationality[i])
                (nationality[i] == brit) -> (house_color[i] == red);
                
        
        //2) The Swedish man keeps dogs as pets.
        foreach(nationality[i])
                (nationality[i] == swede) -> (pet[i] == dogs);
                
                
        //3) The Danish man drinks tea.        
        foreach(nationality[i])
                (nationality[i] == dane) -> (beverage[i] == tea);
        
        
        //4) The Green house is next to, and on the left of the White house.
        house_color[4] != green; // fix: green needs a house to its right
        foreach(house_color[i])
                 if (i<4)
                    (house_color[i] == green) -> (house_color[i+1] == white);
        
        
        //5) The owner of the Green house drinks coffee.
        foreach(house_color[i])
                (house_color[i] == green) -> (beverage[i] == coffee);
                
        
        //6) The person who smokes Pall Mall rears birds.
        foreach(cigar[i])
                (cigar[i] == pall_mall) -> (pet[i] == birds);
        
        
        //7) The owner of the Yellow house smokes Dunhill.
        foreach(house_color[i])
                (house_color[i] == yellow) -> (cigar[i] == dunhill);
        
        
        //8) The man living in the center house drinks milk.
        foreach(house_color[i])
                if (i==2) // i==2 implies the center house (0,1,2,3,4) 2 is the center
                    beverage[i] == milk;
        
        
        
        //9) The Norwegian lives in the first house.
        foreach(nationality[i])        
                if (i==0) // i==0 is the first house
                    nationality[i] == norwegian;
        
        
        
        //10) The man who smokes Blends lives next to the one who keeps cats.
        foreach(cigar[i])        
                if (i==0) // if the man who smokes blends lives in the first house then the person with cats will be in the second
                    (cigar[i] == blends) -> (pet[i+1] == cats);
        
        foreach(cigar[i])        
                if (i>0 && i<4) // if the man is not at the ends he can be on either side
                    (cigar[i] == blends) -> (pet[i-1] == cats) || (pet[i+1] == cats);
        
        foreach(cigar[i])        
                if (i==4) // if the man is at the last
                    (cigar[i] == blends) -> (pet[i-1] == cats);
        
        foreach(cigar[i])        
                if (i==4)
                    (pet[i] == cats) -> (cigar[i-1] == blends);
        
        
        //11) The man who keeps horses lives next to the man who smokes Dunhill.
        foreach(pet[i])
                if (i==0) // similar to the last case
                    (pet[i] == horses) -> (cigar[i+1] == dunhill);
        
        foreach(pet[i])        
                if (i>0 && i<4)
                    (pet[i] == horses) -> (cigar[i-1] == dunhill) || (cigar[i+1] == dunhill);
                    
        foreach(pet[i])        
                if (i==4)
                    (pet[i] == horses) -> (cigar[i-1] == dunhill);
                    


        //12) The man who smokes Blue Master drinks beer.
        foreach(cigar[i])
                (cigar[i] == blue_master) -> (beverage[i] == beer);
        
        
        //13) The German smokes Prince.
        foreach(nationality[i])        
                (nationality[i] == german) -> (cigar[i] == prince);
        

        //14) The Norwegian lives next to the blue house.
        foreach(nationality[i])
                if (i==0)
                    (nationality[i] == norwegian) -> (house_color[i+1] == blue);
        
        foreach(nationality[i])        
                if (i>0 && i<4)
                    (nationality[i] == norwegian) -> (house_color[i-1] == blue) || (house_color[i+1] == blue);
        
        foreach(nationality[i])        
                if (i==4)
                    (nationality[i] == norwegian) -> (house_color[i-1] == blue);
        

        //15) The Blends smoker lives next to the one who drinks water.            
        foreach(cigar[i])            
                if (i==0)
                    (cigar[i] == blends) -> (beverage[i+1] == water);
        
        foreach(cigar[i])        
                if (i>0 && i<4)
                    (cigar[i] == blends) -> (beverage[i-1] == water) || (beverage[i+1] == water);
                    
        foreach(cigar[i])        
                if (i==4)
                    (cigar[i] == blends) -> (beverage[i-1] == water);
        
    } // end of the constraint block
    


    // display all the attributes
    task display ;
        foreach (house_color[i])
            begin
                $display("HOUSE : %s",house_color[i].name());
            end
        foreach (nationality[i])
            begin
                $display("NATIONALITY : %s",nationality[i].name());
            end
        foreach (beverage[i])
            begin
                $display("BEVERAGE : %s",beverage[i].name());
            end
        foreach (cigar[i])
            begin
                $display("CIGAR: %s",cigar[i].name());
            end
        foreach (pet[i])
            begin
                $display("PET : %s",pet[i].name());
            end
        foreach (pet[i])
            if (pet[i] == fish)
                $display("THE ANSWER TO THE RIDDLE : The %s has %s ", nationality[i].name(), pet[i].name());
    
    endtask // end display
    
    
endclass




program main ;

    initial
        begin
            Einstein_problem ep;
            ep = new();
            if(!ep.randomize())
                $display("ERROR");
            ep.display();
        end
endprogram // end of main

        





Sudoku solver using Incisive Enterprise Verifier (IEV) and Assertion-Driven Simulation (ADS)

Just in time for the holidays: inside the posted tar ball is some code to solve 9x9 Sudoku puzzles with the Assertion-Driven Simulation (ADS) capability of Incisive Enterprise Verifier (IEV). Enjoy! - Joerg Mueller, Solutions Engineer for Team Verify





Importing a capacitor interactive model from manufacturer

Hello,

I am trying to import (in Spectre) a SPICE model of a ceramic capacitor manufactured by Samsung EM. The link that includes the model is here:

http://weblib.samsungsem.com/mlcc/mlcc-ec.do?partNumber=CL05A156MR6NWR

They provide a static SPICE model and an interactive SPICE model.

I had no problem including the static model.

However, the interactive model, which models voltage and temperature coefficients, does not seem to be an ordinary SPICE model. They provide HSPICE, LTSPICE, and PSPICE model files, and I have failed to include any of them.

Any suggestions?





Library Characterization Tidbits: Exploring Intuitive Means to Characterize Large Mixed-Signal Blocks

Let’s review a key feature of Cadence Liberate AMS Mixed-Signal Characterization that offers ease of use along with many other benefits, such as automation of standard Liberty model creation and up to 20X throughput improvement.






Five Reasons I'm Excited About Mixed-Signal Verification in 2015

Key Findings: Many more design teams will be reaching the mixed-signal methodology tipping point in 2015. That means you need to have a (verification) plan, and measure and execute against it.

As 2014 draws to a close, it is time to look ahead to the coming years and make a plan. While the macro view of the chip design world shows that it has been a mixed-signal world for a long time, it is primarily the digital teams that have rapidly evolved their design and verification practices over the past decade. Well, I claim that is about to change. 2015 will be a watershed year for many more design teams because of the following factors:

  • 85% of designs are mixed signal, and it is going to stay that way (there is no turning back)
  • Advanced node drives new techniques, but they will be applied on all nodes
  • Equilibrium of mixed-signal designs being challenged, complexity raises risk level
  • Tipping point signs are evident and pervasive, things are going to change
  • The convergence of “big A” and “big D” demands true mixed-signal practices

Reason 1: Mixed-signal is dominant

To begin the examination of what is going to change and why, let’s start with what is not changing. IBS reports that mixed-signal accounts for roughly 85% of chip design starts in 2014, and that the percentage will hold at that level in the coming years. It is a mixed-signal world and there is no turning back!

 

Figure 1. IBS: Mixed-signal design starts as percent of total

The foundational nature of mixed-signal designs in the semiconductor industry is well established. The reason it is exciting is that a stable foundation provides a platform for driving change. (It’s hard to drive on crumbling infrastructure.  If you’re from California, you know what I mean, between the potholes on the highways and the earthquakes and everything.)

Reason 2: Innovation in many directions, mostly mixed-signal applications

While the challenges being felt at the advanced nodes, such as double patterning and the adoption of FinFET devices, have slowed some teams from moving on to nodes past 28nm, innovation has simply turned in different directions. Applications for the Internet of Things, automotive, and medical all have strong mixed-signal elements in their semiconductor content value proposition. What is critical to recognize is that many of the design techniques that were initially driven by advanced-node programs have merit across the spectrum of active semiconductor process technologies. For example, digitally controlled, calibrated, and compensated analog IP, along with power-reducing multi-supply domains, power shut-off, and state retention, are being applied in many programs on “legacy” nodes.

Another graph from IBS shows that the design starts at 45nm and below will continue to grow at a healthy pace.  The data also shows that nodes from 65nm and larger will continue to comprise a strong majority of the overall starts. 


Figure 2.  IBS: Design starts per process node

TSMC made a comprehensive announcement in September related to “wearables” and the Internet of Things. From their press release:

TSMC’s ultra-low power process lineup expands from the existing 0.18-micron extremely low leakage (0.18eLL) and 90-nanometer ultra low leakage (90uLL) nodes, and 16-nanometer FinFET technology, to new offerings of 55-nanometer ultra-low power (55ULP), 40ULP and 28ULP, which support processing speeds of up to 1.2GHz. The wide spectrum of ultra-low power processes from 0.18-micron to 16-nanometer FinFET is ideally suited for a variety of smart and power-efficient applications in the IoT and wearable device markets. Radio frequency and embedded Flash memory capabilities are also available in 0.18um to 40nm ultra-low power technologies, enabling system level integration for smaller form factors as well as facilitating wireless connections among IoT products.

Compared with their previous low-power generations, TSMC’s ultra-low power processes can further reduce operating voltages by 20% to 30% to lower both active power and standby power consumption and enable significant increases in battery life—by 2X to 10X—when much smaller batteries are demanded in IoT/wearable applications.

The focus on power is quite evident and this means that all of the power management and reduction techniques used in advanced node designs will be coming to legacy nodes soon.

Integration and miniaturization are being pursued from the system-level in, as well as from the process side. Techniques for power reduction and system energy efficiency are central to innovations under way.  For mixed-signal program teams, this means there is an added dimension of complexity in the verification task. If this dimension is not methodologically addressed, the level of risk adds a new dimension as well.

Reason 3: Trends are pushing the limits of established design practices

Risk is the bane of every engineer, but without risk there is no progress. And, sometimes the amount of risk is not something that can be controlled. Figure 3 shows some of the forces at work that cause design teams to undertake more risk than they would ideally like. With price and form factor as primary value elements in many growing markets, integration of analog front-end (AFE) with digital processing is becoming commonplace.  

 

Figure 3.  Trends pushing mixed-signal out of equilibrium

The move to the sweet spot of manufacturing at 28nm enables more integration, while providing excellent power and performance parameters with the best cost per transistor. Variation, however, becomes greater and harder to control. For analog design, this means more digital assistance for calibration and compensation. For greatest flexibility and resiliency, many will opt for embedding a microcontroller to perform the analog control functions in software. Finally, the first wave of leaders has already crossed the methodology bridge into true mixed-signal design and verification; those who do not follow are destined to fall farther behind.

Reason 4: The tipping point accelerants are catching fire

The factors cited in Reason 3 all have a technical grounding that serves to create pain in the chip-development process. The more factors that are present, the harder the pain is to ignore, and the more compelling the relief afforded by adopting known best practices for truly mixed-signal design (versus divide-and-conquer along analog and digital lines).

In the past, design performance was measured in MHz with simple static timing and power analysis. Design flows were conveniently partitioned, literally and figuratively, along analog and digital boundaries. Today, however, there are gigahertz digital signals that interact at the package and board level in analog-like ways. New, dynamic power analysis methods enabled by advanced library characterization must be melded into new design flows. These flows comprehend the growing amount of feedback between analog and digital functions, which are becoming so interlocked as to be inseparable. This interlock necessitates design flows that include metrics-driven and software-driven testbenches, cross-fabric analysis, electrically aware design, and database interoperability across analog and digital design environments.


Figure 4.  Tipping point indicators

Energy efficiency is a universal driver at this point.  Be it cost of ownership in the data center or battery life in a cell phone or wearable device, using less power creates more value in end products. However, layering multiple energy management and optimization techniques on top of complex mixed-signal designs adds yet more complexity demanding adoption of “modern” mixed-signal design practices.

Reason 5: Convergence of analog and digital design

Divide and conquer is always a powerful tool for complexity management. However, as the number of interactions across the divide increases, the sub-optimality of those frontiers becomes more evident. Convergence is the name of the game. Just as the analog and digital elements of chips are converging, so too will the industry practices associated with dealing with the converged world.


Figure 5. Convergence drivers

Truly mixed-signal design is a discipline that unites the analog and digital domains. That means that there is a common/shared data set (versus forcing a single cockpit or user model on everyone). 

In verification, the modern saying is “start with the end in mind”. That means creating a formal approach to planning what will be tested, how it will be tested, and the metrics for success of the tests. Organizing the mechanics of testbench development using the Universal Verification Methodology (UVM) has proven benefits. The mixed-signal elements of SoC verification are not exempt from those benefits.
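
To make that concrete, here is a minimal, hedged sketch of what “starting with the end in mind” can look like in UVM code. The test class name and messages are illustrative assumptions, not from any particular Cadence library; a real environment would add sequences, agents, and coverage tied back to the verification plan.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Illustrative only: a bare-bones UVM test whose run phase exists to
// execute planned stimulus and accumulate the planned metrics.
class ms_plan_test extends uvm_test;
    `uvm_component_utils(ms_plan_test)

    function new(string name = "ms_plan_test", uvm_component parent = null);
        super.new(name, parent);
    endfunction

    task run_phase(uvm_phase phase);
        phase.raise_objection(this);
        // drive the planned (mixed-signal) stimulus here and sample the
        // coverage that the verification plan calls for
        `uvm_info(get_type_name(), "executing planned mixed-signal tests", UVM_LOW)
        phase.drop_objection(this);
    endtask
endclass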

Competition is growing fiercer for semiconductor design teams around the world. Not being equipped with the best-known practices creates a competitive deficit that is hard to overcome with hard work alone. As the landscape of IC content drives toward a more energy-efficient, mixed-signal nature, the mounting risk posed by old methodologies may cause casualties in the coming year. Better to move forward with haste and create a position of strength from which differentiation and excellence in execution can be forged.

Summary

2015 is going to be a banner year for mixed-signal design and verification methodologies. Those that have forged ahead are in a position of execution advantage. Those that have not will be scrambling to catch up, but with the benefits of following a path that has been proven by many market leaders.



  • uvm
  • mixed signal design
  • Metric-Driven-Verification
  • Mixed Signal Verification
  • MDV-UVM-MS


QSPI Direct Access bare metal SW driver

Hello,

I'm reading the Design specification for IP6514E.

We will use the DAC mode.

It seems very simple, but I don't see any code sequence, i.e.:

  1. Write 03 (Basic Read) to this register

  2. Write start address to this register

  3. Write "execute" to this register

  4. Read the data from this register

Thanks,

Stefan





Live: Patidar Women's Convention: only those movements that began in North Gujarat have succeeded

The Patidar women's convention has begun in #Mehsana. Slogans of "Jai Sardar, Jai Patidar" are echoing on the city's Modhera Road. The convention was given conditional approval, but it is being reported that the conditions have been violated here. Watch live reporting of the convention.





EXCLUSIVE: How is Sourav spending time with his family during the lockdown? Take a look





Exclusive: Government is considering running trains to take workers home

The government has asked the railways to prepare and submit a plan at the earliest for running point-to-point, that is, non-stop, trains





Coronavirus Live: 41 people test corona-positive at once in a single area of Delhi





Covid 19 live: Air Force showers flowers over Ahmedabad Civil Hospital for the corona warriors





COVID-19 LIVE: 29,453 active corona cases in the country, 1,373 dead, 11,706 patients recovered





Covid 19 live: 25 staff at a Delhi hospital infected; patients and workers quarantined





Covid 19 Live: Total corona cases in the country near 50 thousand; 1,694 dead, 14,182 patients recovered





Coronavirus Live: 52,952 corona cases in the country, 1,783 dead in total, 15,266 patients recovered





COVID-19 Live: Corona patient count in Maharashtra reaches 19,063; Tamil Nadu crosses 6 thousand