
RBI asks Banks to educate public

Banks to educate public against placing deposits in dubious schemes





Clarification About Indian Rupee Demonetisation Of Notes

Answers to Indian Rupee Demonetisation of Rs 500 and Rs 1,000 Currency Notes





Zoetis' Promotion of Cattleman's College "Adds Whole New Dimension"





Helping to Re-home Sydney's Dogs and Cats





Overcast and 30 F at CATTARAUGUS COUNTY-OLEAN, NY


Winds are from the West at 13.8 gusting to 21.9 MPH (12 gusting to 19 KT). The humidity is 64%. The wind chill is 19. Last Updated on May 9 2020, 11:55 am EDT.





Coronavirus is a crisis for the developing world, but here's why it needn't be a catastrophe | Esther Duflo & Abhijit Banerjee

A radical new form of universal basic income could revitalise damaged economies

  • Esther Duflo and Abhijit Banerjee won the 2019 Nobel prize in economics for their work on poverty alleviation
    While countries in east Asia and Europe are gradually taking steps towards reopening their economies, many in the global south are wondering whether the worst of the pandemic is yet to come. As economists who work on poverty alleviation in developing countries, we are often asked what the effects of coronavirus will be in south Asia and Africa. The truth is, we don’t know. Without extensive testing to map the number of cases, it’s impossible to tell how far the virus has already spread. We don’t yet have enough information about how Covid-19 behaves under different conditions such as sunlight, heat and humidity. Developing countries’ more youthful populations may spare them the worst of the pandemic, but health systems in the global south are poorly equipped to deal with an outbreak, and poverty is linked to co-morbidities that put people at a higher risk of serious illness.

    Without the information widespread testing provides, many poorer countries have taken an extremely cautious approach. India imposed a total lockdown on 24 March, by which time the country had about 500 confirmed cases. Countries such as Rwanda, South Africa and Nigeria enforced lockdowns in late March, long before the virus was expected to peak. But these lockdown measures can’t last forever. Poorer countries could have used the quarantine to buy time, gather information about how the disease behaves and develop a testing and tracing strategy. Unfortunately, not much of this has happened. And, far from coming to their aid, rich countries have outrun poorer nations in the race for PPE, oxygen and ventilators.






    Harbaugh advocates rule changes for NFL draft

    Michigan coach Jim Harbaugh has made an open-letter proposal suggesting more flexibility for players considering the jump from college to the NFL, including returning to college if they go undrafted.





    [Cross Country] Cross Country Travels to Bearcat Open 9/6/19!

    Tomorrow, September 6, 2019, Haskell XC will compete in the Bearcat Open against Northwest Missouri State!





    Verification of the Lane Adapter FSM of a USB4 Router Design Is Not Simple

    Verifying the lane adapter state machine in a router design is an involved task that must be approached from several aspects, including its link training functionality.

    The diagram below shows two lane adapters connected to each other, each going through the link training process. Each training sub-state transition is contingent on conditions for both transmission and reception of the relevant ordered sets. Until both conditions are satisfied, an adapter cannot transition to the next training sub-state.

    As deduced from the lane adapter state machine section of the USB4 specification, the reception condition for the next training sub-state transition is less strict than the transmission condition. For example, for the LOCK1 to LOCK2 transition, the reception condition requires only that two SLOS symbols in a row be detected, while the transmission condition requires at least four complete SLOS1 ordered sets to be sent.

    Given these conditions in the specification, lane adapter A may detect the two SLOS or TS ordered sets sent by lane adapter B at the other end right at the start, as soon as it begins transmitting its own SLOS or TS ordered sets. It is equally possible that lane adapter A has not yet detected these ordered sets even after it has met the condition of sending the minimum number of SLOS or TS ordered sets.

    In the latter case, even though lane adapter A has satisfied the transmission condition, it cannot transition to the next sub-state because the reception condition is not yet met. It must first wait until it has detected the required number of ordered sets before it can go to the next sub-state. This wait cannot be endless, however: the specification defines timeouts, after which the training process may be re-attempted.
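    To make the interlock concrete, below is a minimal behavioral sketch, in SystemVerilog, of the LOCK1-to-LOCK2 decision described above. The signal names, counter sizes, and timeout value are illustrative assumptions, not values from the USB4 specification.

        // Behavioral sketch of the interlocked LOCK1 -> LOCK2 transition.
        // All names and constants below are illustrative, not from the spec.
        module lock1_to_lock2_sketch (
          input  logic clk,
          input  logic rst_n,
          input  logic slos_symbol_rx,   // high for each SLOS symbol detected on receive
          input  logic slos_os_tx_done   // pulses once per complete SLOS1 ordered set sent
        );
          localparam int MIN_TX_OS   = 4;       // transmission condition: >= 4 complete ordered sets
          localparam int TIMEOUT_CYC = 100000;  // placeholder for the spec-defined timeout

          logic [1:0]  rx_run;           // consecutive SLOS symbols seen (reception condition: >= 2)
          int unsigned tx_cnt, timer;
          logic        in_lock2, restart_training;

          always_ff @(posedge clk or negedge rst_n) begin
            if (!rst_n) begin
              rx_run <= '0; tx_cnt <= 0; timer <= 0;
              in_lock2 <= 1'b0; restart_training <= 1'b0;
            end
            else if (!in_lock2 && !restart_training) begin
              rx_run <= slos_symbol_rx ? ((rx_run == 2) ? rx_run : rx_run + 1'b1) : '0;
              if (slos_os_tx_done) tx_cnt <= tx_cnt + 1;
              timer <= timer + 1;
              // Advance only when BOTH the reception and transmission conditions hold;
              // having met only the transmission condition, the adapter keeps waiting.
              if (rx_run >= 2 && tx_cnt >= MIN_TX_OS)
                in_lock2 <= 1'b1;
              else if (timer >= TIMEOUT_CYC)
                restart_training <= 1'b1;  // timeout expired: re-attempt training
            end
          end
        endmodule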

    This interlocked operation also ensures that the state machine of one lane adapter does not fall out of sync with that of the other lane adapter. Such scenarios can occur whenever the lane adapter state machine enters the training state from other states.

    Cadence has a mature Verification IP solution that provides comprehensive verification of the various aspects of the logical layer of a USB4 router design.





    Cadence JasperGold Brings Formal Verification into Mainstream IC Verification Flows

    Formal verification is a complex technology that has traditionally required experts or specialized teams who stood apart from the IC design and verification flow. Taking a different approach, a new release of the Cadence JasperGold formal verification platform (June 8, 2015) provides formal techniques that complement simulation, emulation, and debugging in the form of “Apps” or under-the-hood solutions that any design or verification engineer can use.

    JasperGold was the initial (in fact only) product of Jasper Design Automation, acquired by Cadence in 2014. Jasper pioneered the formal Apps concept several years ago. While the company had previously sold JasperGold as a one-size-fits-all solution, Jasper began selling semi-automated JasperGold Apps that solved specific problems using formal analysis technology.

    The new release is the next generation of JasperGold and will be available later this month. It includes three major improvements over previous Cadence and Jasper formal analysis offerings:

    • A unified Cadence Incisive and JasperGold formal verification platform delivers up to 15X performance gain over previous solutions.
    • JasperGold is integrated into the Cadence System Development Suite, where it provides formal-assisted simulation, emulation, and coverage. As a result, System Development Suite users can find bugs three months earlier than existing verification methods.
    • JasperGold’s formal analysis engines are integrated with the recently announced Indago debug platform, automating root cause analysis and on-the-fly, what-if exploration.

    Best of Both Formal Verification Worlds

    Taking advantage of technologies from both Cadence and Jasper, the new JasperGold represents a “best of both worlds” solution, according to Pete Hardee, product management director at Cadence. This solution combines technologies from the Cadence Incisive Enterprise Verifier and Incisive Formal Verifier with JasperGold formal analysis engines.

    For example, to ease migration from Incisive formal tools, Cadence has integrated an Incisive common front end into the JasperGold apps platform. Jasper formal engines can run within the Incisive run-time environment. Cadence has also brought some selected Incisive formal engines into JasperGold.

    As shown to the right, the JasperGold platform supports both the existing JasperGold front-end parser and the Incisive front-end parser. Hardee observed that this dual parser arrangement simplifies migration from Incisive formal tools to JasperGold, and provides a common compilation environment for people who want to use JasperGold with Incisive simulation. Further, the common run-time environment enables formal-assisted simulation.

    The combination of JasperGold engines and Incisive engines supports two use models for formal analysis: formal proofs and bug hunting. In the first case, formal engines try all combinations of inputs without a testbench. The test is driven by formal properties written in languages such as SVA (SystemVerilog assertions) or PSL (Property Specification Language). Completion of a property is exhaustive proof that something can or cannot happen. This provides a “much stronger result” than simulation, Hardee said.
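    For illustration, a hand-written property of the kind a formal engine proves exhaustively might look like the SVA below. The signals and the four-cycle bound are generic assumptions, not taken from the JasperGold release.

        // Generic SVA example: every grant must be acknowledged within four cycles.
        // A full formal proof of the assertion covers all possible input sequences.
        module req_ack_props (
          input logic clk,
          input logic rst_n,
          input logic req,
          input logic gnt,
          input logic ack
        );
          property p_gnt_then_ack;
            @(posedge clk) disable iff (!rst_n)
              gnt |-> ##[1:4] ack;
          endproperty
          a_gnt_then_ack : assert property (p_gnt_then_ack);

          // Formal can also prove a scenario is reachable at all (a cover property).
          c_back_to_back_req : cover property (@(posedge clk) req ##1 req);
        endmodule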

    He also noted that formal analysis doesn’t necessarily require that all properties are completed. “You can get a lot of value even if proofs don’t complete,” he said. “Proofs that run deep enough to find bugs are just fine.”

    Bug hunting involves random searches, and JasperGold bug hunting engines are very fast. However, these engines don’t necessarily use the most optimal path to get to a bug. So, Cadence engineers brought a constraint solver from Incisive and integrated it into JasperGold. “It looks at the constraints in the environment and gives you a better starting point,” Hardee said. “It takes more up-front time, but once you’ve done that the bug hunting engines can actually take a shorter path and find a bug a lot quicker.”

    Another new JasperGold capability from the Incisive Formal Verifier is called “search pointing.” This uses simulation to penetrate deeply into the state space, and then kicks off a random formal search from a given point that you’ve reached in simulation. This technique makes it possible to find bugs that are very deep in the design.

    It is probably clear by now that a number of different formal “engines” may be required to solve a given verification problem. Traditionally, a formal tool (or user) will farm a problem out to many engines and see which one works best. To put more intelligence into that process, Cadence launched the Trident “multi-cooperating engine” a couple of years ago. That has now been brought into JasperGold, where it helps “orchestrate” the engines according to what will work best for the design. This is a big part of the reason for the 15X speedup noted earlier in this post.

    Integration with System Development Suite

    The Cadence System Development Suite is an integrated set of hardware/software development and verification engines, including virtual prototyping, Incisive simulation, emulation, and FPGA-based prototyping. As shown below, JasperGold technology is integrated into the System Development Suite in several places, including formal-assisted debug, formal-assisted verification closure, formal-assisted simulation, formal-assisted emulation, and the Incisive vManager verification planning tool.

    Formal-assisted emulation sounds like it should be easy, especially since Cadence has both accelerated verification IP (VIP) and assertion-based VIP. However, there’s a complication. Accelerated VIP represents less verification content than simulation VIP, because you have to remove many checkers to get VIP to compile on a Palladium emulator. That’s because the Palladium requires synthesizable code.

    What you can do, however, is use assertion-based VIP in “snoop mode” as shown below. Assertion-based VIP coded in synthesizable SystemVerilog can replace the missing checkers in accelerated VIP. In this diagram, everything in the green box is running in the emulator and is thus completely accelerated.
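    As an illustration of the kind of checker that can be recoded in the synthesizable subset, here is a generic sketch of a snoop-mode monitor (illustrative signal names and rule, not the actual Cadence accelerated or assertion-based VIP).

        // Synthesizable "snoop mode" checker sketch: it only observes bus signals
        // and raises a flag on a protocol violation, so it can run in the emulator.
        module handshake_snoop_checker (
          input  logic clk,
          input  logic rst_n,
          input  logic valid,     // observed only; the checker drives nothing
          input  logic ready,
          output logic violation
        );
          logic valid_q, ready_q;

          // Rule being checked: once valid is asserted it must stay asserted until
          // ready is seen. Written with plain registers so it maps onto the emulator.
          always_ff @(posedge clk or negedge rst_n) begin
            if (!rst_n) begin
              valid_q   <= 1'b0;
              ready_q   <= 1'b0;
              violation <= 1'b0;
            end
            else begin
              valid_q <= valid;
              ready_q <= ready;
              if (valid_q && !ready_q && !valid)
                violation <= 1'b1;  // valid dropped before the transfer completed
            end
          end
        endmodule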

     

    Another example of formal-assisted emulation has to do with deep traces. As Hardee noted, emulation will produce very long traces, and it can be very difficult to find a point of interest in the trace and determine what caused an error. With formal-assisted emulation, users can find interesting events within the traces and create properties that mark them, so a debugger can find these events and trace back to the root cause.

    Formal-assisted verification closure is available with the new JasperGold release. This is possible because you can use the vManager product to determine which tasks were completed by formal engines. It’s important information for verification managers who are not used to formal tools, Hardee noted.

    Another aspect of formal-assisted verification closure is the JasperGold Unreachability Analysis (UNR) App, which can save simulation users weeks of time and effort. This App takes in the simulation coverage database and RTL, and automatically generates properties to explore coverage holes and determine if holes are reachable or unreachable. The App then generates an unreachable coverage point database. If the unreachable code does something useful, there’s a bug in the design or the testbench; if not, you don’t have to worry about it. The diagram below shows how it works.
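    As a toy illustration of what such a hole looks like (my own example, not from the App documentation), the error branch below is a coverage point that no stimulus can ever hit, which a formal engine can prove.

        // The ERR branch is dead: no assignment ever drives state to ERR, so the
        // corresponding coverage hole is provably unreachable and needs no test.
        module unr_example (
          input  logic clk,
          input  logic rst_n,
          input  logic in_valid,
          output logic out_err
        );
          typedef enum logic [1:0] {IDLE, RUN, ERR} state_t;
          state_t state;

          always_ff @(posedge clk or negedge rst_n) begin
            if (!rst_n)        state <= IDLE;
            else if (in_valid) state <= RUN;
            else               state <= IDLE;  // ERR is never assigned anywhere
          end

          always_comb out_err = (state == ERR);  // shows up as a coverage hole
        endmodule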

    Formal-Assisted Debugging

    The third major component of the JasperGold announcement is the integration of formal analysis into the Indago debugging platform. As shown below, this platform has several apps, including the Indago Debug Analyzer. Two formal debug capabilities from the Jasper Visualize environment have been added to the Indago Debug Analyzer:

    • Highlight Relevant Logic: This highlights the “cone of influence,” or the logic that is involved in reaching a given point
    • Why: This button highlights the immediate causes for a given event, and allows users to trace backwards in time

     

    More formal capabilities will come with the Indago Advanced Debug Analyzer app, scheduled for release towards the end of 2015. This includes Quiet Trace, a Jasper capability that reduces trace activity to transactions relevant to an event. Also, a what-if analysis allows on-the-fly trace editing and recalculation to explore effects and sensitivities, without having to re-compile and re-execute the simulation.

    Finally, Cadence has a Superlint flow that is now fully integrated with the JasperGold Visualize debugger. This two-tiered flow includes a basic lint capability as well as automated formal analysis based on the JasperGold Structural Property Synthesis app. “This could be a very good entry point for designers to start using formal,” Hardee said.

    “Formal is taking off,” Hardee concluded. “People are no longer talking about return on investment for formal—they have established that. Now they’re supporting a proliferation of formal in their companies such that a wider set of people experience the benefit from that proven return on investment.”

    Further information is available at the JasperGold Formal Verification Platform (Apps) page.

    Richard Goering

    Related Blog Posts

    JUG Keynote—How Jasper Formal Verification Technology Fits into the Cadence Flow

    Why Cadence Bought Jasper—A New Era in Formal Analysis

    Q&A: An R&D Perspective on Formal Verification—Past, Present and Future





    Mediatek Deploys Perspec for SoC Verification of Low Power Management (part 3 of 3)

    Here we conclude the blog series and highlight the results of Mediatek's use of Cadence Perspec™ System Verifier for their SoC-level verification. In case you missed it, Part 1 of the blog is here, and Part 2 of the blog is here. One of their key...(read more)





    Cadence Collaborates with Test & Verification Solutions on Portable Stimulus

    The Cadence® Connections® Verification Program brings together a worldwide network of services, training, and IP development experts that support Cadence verification solutions. The program members help customers accelerate the adoption of new...(read more)





    Preparing Accellera Portable Stimulus Standard for Ratification

    The Accellera Portable Stimulus Working Group met at DVCon 2018 to move the process forward towards ratification. While we can't predict exactly when it will be ratified, the goal is now more clearly in sight! The Cadence booth was busy with a lo...(read more)





    What’s Hot in Verification at this Year’s CDNLive? It’s Portable Stimulus Again!

    CDNLive is a user conference, and verification is one of the largest categories of content with multiple tracks covering multiple days. Portable stimulus is one of the hottest new areas in verification, and continues to be popular in all venues. At l...(read more)





    Integration and Verification of PCIe Gen4 Root Complex IP into an Arm-Based Server SoC Application

    Learn about the challenges and solutions for integrating and verifying PCIe® Gen4 into an Arm-based server SoC. Listen to this relatively short webinar by Arm and Cadence, as they describe the collaboration and results, including methodology and...(read more)





    Verification Reflections on 2018

    In my predictions for 2018 I had identified five key trends driving verification in 2018 – Security, Safety, Application Specificity, Processor Ecosystems and System Design Enablement, all centered around ecosystems. Looking back now as the yea...(read more)





    detecting duplicate shapes

    Is there any way I can find duplicated shapes?

    I would like to find out whether two or more shapes are identical and placed in the same location.





    My Journey - From a Layout Designer to an Application Engineer

    Today, we are living in the era where whatever we think of as an idea is not far from being implemented…thanks to machine learning (ML) and artificial intelligence (AI) entering into the...






    Metamorphic Testing: The Future of Verification?

    Curious about what’s going on behind the scenes with verification? Bernard Murphy, Jim Hogan, and our own Paul Cunningham are on the case with the “Innovation in Verification” blog stream over at semiwiki.com. Every month, this trio reviews a newly-published paper in academia that pertains to verification and discusses its implications. Be sure to stop by—it’s a great place to see what might be coming down the pipeline someday.

    This month, they discuss the implications of metamorphic testing. The purpose of metamorphic testing is to define a verification approach for cases where there is no “golden reference.” This situation comes up a lot now as designs grow in complexity, and it raises the question: how does one know the design is verified if there is no standard to compare to? Metamorphic testing addresses the absence of a “gold standard” by comparing the results of related tests instead. The paper reviewed by this team used metamorphic testing to study methods of managing JavaScript tags.
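    As a toy sketch of the idea (my own example, unrelated to the JavaScript-tag study in the paper): even with no golden reference for the function under test, a relation between two related runs can still be checked.

        // Metamorphic check sketch: the function has no golden reference, but its
        // result must not depend on the order of the input samples.
        module metamorphic_sketch;
          // Stand-in for a design function: accumulates only the positive samples.
          function automatic int sum_positive(int samples[]);
            sum_positive = 0;
            foreach (samples[i])
              if (samples[i] > 0) sum_positive += samples[i];
          endfunction

          initial begin
            int base[]     = '{3, -1, 7, 0, 5};
            int permuted[] = '{5, 7, -1, 3, 0};  // same values, different order

            // Metamorphic relation: the result must be order-invariant.
            if (sum_positive(base) != sum_positive(permuted))
              $error("Metamorphic relation violated: result depends on input order");
            else
              $display("Metamorphic relation holds");
          end
        endmodule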

    Paul saw this as a valuable new class of coverage. Metamorphic testing represents a way to create better distribution analyses through understanding the relationships among tests. This can reveal critical-but-complex issues that traditional verification methods may overlook. He saw this as an emerging class of coverage that new verification tools could be built around. Paul asserted that a future metamorphic-testing-based tool’s main contribution to the field of verification would be to better analyze noisy performance results where the noise is multi-modal. It could be useful in detecting race conditions and similar hard-to-debug anomalies. Paul also sees metamorphic testing as ripe for ML techniques. Overall—Paul sees a bright future for metamorphic testing in verification.

    Jim is reminded of Solido and Spice—these metamorphic testing capabilities are “more than just a feature”—they might be a product. Maybe even a whole new class of verification tools, as Paul said.

    Bernard says that this topic is “too rich to address in one blog”, so be sure to head over to the post to see more of what the future has in store for verification.





    New Rapid Adoption Kit (RAK) Enables Productive Mixed-Signal, Low Power Structural Verification

    All engineers can enhance their mixed-signal, low-power structural verification productivity by learning while doing with a PIEA RAK (Power Intent Export Assistant Rapid Adoption Kit). They can verify the mixed-signal chip by automatically generating a macromodel for their analog block and running it through Conformal Low Power (CLP) to perform a low-power structural check.

    The power structure integrity of a mixed-signal, low-power block is verified via Conformal Low Power integrated into the Virtuoso Schematic Editor Power Intent Export Assistant (VSE-PIEA). Here is the flow.

     

    Applying the flow iteratively from lower to higher levels can verify the power structure.

    Cadence customers can learn more in a Rapid Adoption Kit (RAK) titled IC 6.1.5 Virtuoso Schematic Editor XL PIEA, Conformal Low Power: Mixed-Signal Low Power Structural Verification.

    The RAK includes a demo design (instructions are provided on how to set up the user environment). It introduces the Power Intent Export Assistant (PIEA) feature implemented in the Virtuoso IC615 release. The extracted power intent is then verified by calling Conformal Low Power (CLP) inside the Virtuoso environment.

    • Last Update: 11/15/2012.
    • Validated with IC 6.1.5 and CLP 11.1

    The RAK uses a sample test case to go through PIEA + CLP flow as follows:

    • Setup for PIEA
    • Perform power intent extraction
    • CPF import: it is recommended to import macro CPF, as opposed to design CPF, for sub-blocks. If you choose to import design CPF files, please make sure the design CPF file has power domain information for all the top-level boundary ports
    • Generate macro CPF and design CPF
    • Perform low power verification by running CLP

    It is also recommended to go through older RAKs as prerequisites.

    • Conformal Low Power, RTL Compiler and Incisive: Low Power Verification for Beginners
    • Conformal Low Power: CPF Macro Models
    • Conformal Low Power and RTL Compiler: Low Power Verification for Advanced Users

    To access all these RAKs, visit our RAK Home Page for the Synthesis, Test, and Verification flows.

    Note: To access the above documents, use your Cadence credentials to log on to the Cadence Online Support (COS) website. Cadence Online Support (https://support.cadence.com/) is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you can receive new solutions, Application Notes (technical papers), videos, manuals, and more.

    You can send us your feedback by adding a comment below or using the feedback box on Cadence Online Support.

    Sumeet Aggarwal





    New Incisive Low-Power Verification for CPF and IEEE 1801 / UPF

    On May 7, 2013 Cadence announced a 30% productivity gain in the June 2013 Incisive Enterprise Simulator 13.1 release.  Advanced debug visualization, faster turn-around time, and the extension of eight years of low-power verification innovation to IEEE 1801/UPF are the key capabilities in the release.

    When we talk about low-power verification, it's easy to equate it with simulation.  For certain, simulation is the heart of a low-power verification solution. Simulation enables engineers to run their design in the context of power intent.  The challenge is that a simulation-only approach is inadequate. For example, if engineers could achieve SoC quality by verifying the individual function of each power control module (PCM), then simulation could be enough.  For a single power domain, simulation can be enough. 

    However, when the SoC has multiple power domains -- and we have seen SoCs with hundreds of them -- engineers have to check the PCMs and all of the arcs between the power modes.  These SoCs often synchronize some of the domain switching to reduce overall complexity, creating the potential for signal skew errors on the control signals for the connected domains.  Managing these complexities requires verification methodologies including advanced debug, verification planning, assertion-based verification, Universal Verification Methodology - Low Power (UVM-LP), and more (see Figure 1).

     

    Figure 1:  Comprehensive Low-Power Verification 
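    As one concrete example of the assertion-based piece, a check for the control-signal skew issue described above might look like the sketch below (generic signal names and a two-cycle bound chosen for illustration, not a Cadence library assertion).

        // When two synchronized domains are commanded to switch together, their
        // shut-off controls must change within a bounded window of each other.
        module pcm_skew_check (
          input logic clk,
          input logic rst_n,
          input logic pso_domain_a,  // power shut-off control, domain A
          input logic pso_domain_b   // power shut-off control, synchronized domain B
        );
          property p_bounded_skew(a, b);
            @(posedge clk) disable iff (!rst_n)
              $changed(a) |-> ##[0:2] $changed(b);
          endproperty

          a_skew_a_to_b : assert property (p_bounded_skew(pso_domain_a, pso_domain_b));
          a_skew_b_to_a : assert property (p_bounded_skew(pso_domain_b, pso_domain_a));
        endmodule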

    But even advanced verification methodologies on top of simulation aren't enough.  For example, the state machine that defines the legal and illegal power mode transitions is often written in software. The speed and capacity of the Palladium emulation platform are ideal for verifying in this context, and it is integrated with simulation, sharing debug, UVM acceleration, and static low-power checks. It also reports verification progress into a holistic plan for the SoC.  Another example is the ability to compare the design in the implementation flow with the design running in simulation to make sure that what we verify is what we intend to build.

    Taken together, verification across multiple engines provides the comprehensive low-power verification needed for today's advanced node SoCs.  That's the heart of this low-power verification announcement. 

    Another point you may have noticed is the extension of the Common Power Format (CPF) based power-aware support in the Incisive Enterprise Simulator to IEEE 1801.  We chose to bring IEEE 1801 to simulation first because users like you sometimes need to mix vendors for regression flows.  Over time, Cadence will extend the low-power capabilities throughout its product suite to IEEE 1801.

    If you are using CPF today, you already have the best low-power solution. The evidence is clear:  the upcoming IEEE 1801-2013 update includes many of the CPF features contributed to 1801/UPF to enable methodology convergence.  Since you already have those features in the CPF flow, any migration before you have a mature IEEE 1801-2013 tool flow would reduce the functionality you have today.

    If you are using Unified Power Format (UPF) 1.0 today, you want to start planning your move toward the IEEE 1801-2013 standard.  A good first step would be to move to the IEEE 1801-2009 standard.  It fills holes in the earlier UPF 1.0 definition.  While it does lack key features in -2013, it is an improvement that will make the migration to -2013 easier. The Incisive 13.1 release will run both UPF 1.0 and IEEE 1801-2009 power intent today.

    Over the next few weeks you'll see more technical blogs about the low-power capabilities coming in the Incisive 13.1 release.  You can also join us on June 19 for a webinar that will introduce those capabilities using the reference design supplied with the Incisive Enterprise Simulator release.

    =Adam "The Jouler" Sherer

    (Yes, "Sherilog" is still here.  :-) )





    Freescale Success Stepping Up to Low-Power Verification - Video

    Freescale was a successful Incisive® simulation CPF low-power user when they decided to step up their game. In November 2013, at CDNLive India, they presented a paper explaining how they improved their ability to find power-related bugs using a more sophisticated verification flow.  We were able to catch up with Abhinav Nawal just after his presentation to capture this video explaining the key points in his paper.

    Abhinav had already established a low-power simulation process using directed tests for a design with power intent captured in CPF. While that is a sound approach, it tends to focus on the states associated with each power control module and at least some of the critical power mode changes.  Since the full system can potentially exercise unforeseen combinations of power states, the directed test approach may be insufficient. Abhinav built a more complete low-power verification approach rooted in a low-power verification plan captured in Cadence® Incisive Enterprise Manager.  He still used Incisive Enterprise Simulator and the SimVision debugger to execute and debug his design, but he also added Incisive Metric Center to analyze coverage from his low-power tests and connect that data back to the low-power verification plan.  As a result, he was able to find many critical system-level corner case issues, which, left undetected, would have been catastrophic for his SoC.  In the paper, Abhinav presents some of the key problems this approach was able to find.

    You can achieve results similar to Abhinav's. Incisive Enterprise Simulator can generate a low-power verification plan and power-aware assertions from the power format, and it can collect power-aware coverage.  To get started, you can use the Incisive Low-Power Simulation Rapid Adoption Kit (RAK) for CPF available on Cadence Online Support.

    Just another happy Cadence low-power verification user!

    Regards,

    Adam "The Jouler" Sherer  

     

     





    PCB Editor SKILL program for finding pin location

    Hi,

    I wanted to find the location of a pin in the design using a SKILL program. pin_dbids = axlDBGetDesign()->pins gives me all the dbids of the pins present in my design. But when I enter one of those dbids, pad = axlDBGetPad("000001EA8FD8B9F8" "package geometry/assembly_top" "regular") throws an error stating "This dbid is not user defined. Please enter the user defined". So please provide me a snippet so that I can get the exact pin location in the design using a SKILL script.





    How to get the location of Assembly Line

    Hi 

    I'm trying to find the location of the assembly line in the design automatically, without using "Show Element". I also want to find the end points of that line. The line exists in the "Package Geometry/Assembly_Top" layer. Is there any code snippet to find the location of the assembly line?





    Custom pad shape and symbol: pad locations move when placed on the PCB

    Hi everybody,

    I've created a symbol with custom pad shapes. Everything looks correct in the symbol editor.

    And the 3d view looks correct (upside down to show placement)

    But when I try to place it on the PCB, the two "T"-shaped pads aren't in the correct location.

    I have the pad shape centered on the pad...

    with no offset on the padstack editor.

    Does anybody know how to fix this?

    Thank you!





    Five Reasons I'm Excited About Mixed-Signal Verification in 2015

    Key Findings: Many more design teams will be reaching the mixed-signal methodology tipping point in 2015. That means you need to have a (verification) plan, and measure and execute against it.

    As 2014 draws to a close, it is time to look ahead to the coming years and make a plan. While the macro view of the chip design world shows that it has been a mixed-signal world for a long time, it is primarily the digital teams that have rapidly evolved design and verification practices over the past decade. Well, I claim that is about to change. 2015 will be a watershed year for many more design teams because of the following factors:

    • 85% of designs are mixed signal, and it is going to stay that way (there is no turning back)
    • Advanced node drives new techniques, but they will be applied on all nodes
    • Equilibrium of mixed-signal designs being challenged, complexity raises risk level
    • Tipping point signs are evident and pervasive, things are going to change
    • The convergence of “big A” and “big D” demands true mixed-signal practices

    Reason 1: Mixed-signal is dominant

    To begin the examination of what is going to change and why, let’s start with what is not changing. IBS reports that mixed signal accounts for over 85% of chip design starts in 2014, and that percentage will hold steady at 85% or more in the coming years. It is a mixed-signal world and there is no turning back!

     

    Figure 1. IBS: Mixed-signal design starts as percent of total

    The foundational nature of mixed-signal designs in the semiconductor industry is well established. The reason it is exciting is that a stable foundation provides a platform for driving change. (It’s hard to drive on crumbling infrastructure.  If you’re from California, you know what I mean, between the potholes on the highways and the earthquakes and everything.)

    Reason 2: Innovation in many directions, mostly mixed-signal applications

    While the challenges being felt at the advanced nodes, such as double patterning and adoption of FinFET devices, have slowed some from moving on to nodes past 28nm, innovation has just turned in different directions. Applications for Internet of Things, automotive, and medical all have strong mixed-signal elements in their semiconductor content value proposition. What is critical to recognize is that many of the design techniques that were initially driven by advanced-node programs have merit across the spectrum of active semiconductor process technologies. For example, digitally controlled, calibrated, and compensated analog IP, along with power-reducing multi-supply domains, power shut-off, and state retention are being applied in many programs on “legacy” nodes.

    Another graph from IBS shows that the design starts at 45nm and below will continue to grow at a healthy pace.  The data also shows that nodes from 65nm and larger will continue to comprise a strong majority of the overall starts. 


    Figure 2.  IBS: Design starts per process node

    TSMC made a comprehensive announcement in September related to “wearables” and the Internet of Things. From their press release:

    TSMC’s ultra-low power process lineup expands from the existing 0.18-micron extremely low leakage (0.18eLL) and 90-nanometer ultra low leakage (90uLL) nodes, and 16-nanometer FinFET technology, to new offerings of 55-nanometer ultra-low power (55ULP), 40ULP and 28ULP, which support processing speeds of up to 1.2GHz. The wide spectrum of ultra-low power processes from 0.18-micron to 16-nanometer FinFET is ideally suited for a variety of smart and power-efficient applications in the IoT and wearable device markets. Radio frequency and embedded Flash memory capabilities are also available in 0.18um to 40nm ultra-low power technologies, enabling system level integration for smaller form factors as well as facilitating wireless connections among IoT products.

    Compared with their previous low-power generations, TSMC’s ultra-low power processes can further reduce operating voltages by 20% to 30% to lower both active power and standby power consumption and enable significant increases in battery life—by 2X to 10X—when much smaller batteries are demanded in IoT/wearable applications.

    The focus on power is quite evident and this means that all of the power management and reduction techniques used in advanced node designs will be coming to legacy nodes soon.

    Integration and miniaturization are being pursued from the system-level in, as well as from the process side. Techniques for power reduction and system energy efficiency are central to innovations under way.  For mixed-signal program teams, this means there is an added dimension of complexity in the verification task. If this dimension is not methodologically addressed, the level of risk adds a new dimension as well.

    Reason 3: Trends are pushing the limits of established design practices

    Risk is the bane of every engineer, but without risk there is no progress. And, sometimes the amount of risk is not something that can be controlled. Figure 3 shows some of the forces at work that cause design teams to undertake more risk than they would ideally like. With price and form factor as primary value elements in many growing markets, integration of analog front-end (AFE) with digital processing is becoming commonplace.  

     

    Figure 3.  Trends pushing mixed-signal out of equilibrium

    The move to the sweet spot of manufacturing at 28nm enables more integration, while providing excellent power and performance parameters with the best cost per transistor. Variation becomes greater and harder to control. For analog design, this means more digital assistance for calibration and compensation. For greatest flexibility and resiliency, many will opt for embedding a microcontroller to perform the analog control functions in software. Finally, the first wave of leaders has already crossed the methodology bridge into true mixed-signal design and verification; those who do not follow are destined to fall farther behind.

    Reason 4: The tipping point accelerants are catching fire

    The factors cited in Reason 3 all have a technical grounding that serves to create pain in the chip-development process. The more factors that are present, the harder it is to ignore the pain, and the more compelling the relief afforded by adopting known best practices for truly mixed-signal design (versus dividing and conquering along analog and digital lines).

    In the past design performance was measured in MHz with simple static timing and power analysis. Design flows were conveniently partitioned, literally and figuratively, along analog and digital boundaries. Today, however, there are gigahertz digital signals that interact at the package and board level in analog-like ways. New, dynamic power analysis methods enabled by advanced library characterization must be melded into new design flows. These flows comprehend the growing amount of feedback between analog and digital functions that are becoming so interlocked as to be inseparable. This interlock necessitates design flows that include metrics-driven and software-driven testbenches, cross fabric analysis, electrically aware design, and database interoperability across analog and digital design environments.


    Figure 4.  Tipping point indicators

    Energy efficiency is a universal driver at this point.  Be it cost of ownership in the data center or battery life in a cell phone or wearable device, using less power creates more value in end products. However, layering multiple energy management and optimization techniques on top of complex mixed-signal designs adds yet more complexity demanding adoption of “modern” mixed-signal design practices.

    Reason 5: Convergence of analog and digital design

    Divide and conquer is always a powerful tool for complexity management.  However, as the number of interactions across the divide increase, the sub-optimality of those frontiers becomes more evident. Convergence is the name of the game.  Just as analog and digital elements of chips are converging, so will the industry practices associated with dealing with the converged world.


    Figure 5. Convergence drivers

    Truly mixed-signal design is a discipline that unites the analog and digital domains. That means that there is a common/shared data set (versus forcing a single cockpit or user model on everyone). 

    In verification, the modern saying is “start with the end in mind”. That means creating a formal plan for what will be tested, how it will be tested, and the metrics for success of the tests. Organizing the mechanics of testbench development using the Universal Verification Methodology (UVM) has proven benefits. The mixed-signal elements of SoC verification are not exempt from those benefits.

    Competition is growing fiercer for semiconductor design teams. Not being equipped with the best-known practices creates a competitive deficit that is hard to overcome with just hard work. As the landscape of IC content drives to a more energy-efficient mixed-signal nature, the mounting risk posed by old methodologies may cause casualties in the coming year. Better to move forward with haste and create a position of strength from which differentiation and excellence in execution can be forged.

    Summary

    2015 is going to be a banner year for mixed-signal design and verification methodologies. Those that have forged ahead are in a position of execution advantage. Those that have not will be scrambling to catch up, but with the benefits of following a path that has been proven by many market leaders.



    • uvm
    • mixed signal design
    • Metric-Driven-Verification
    • Mixed Signal Verification
    • MDV-UVM-MS


    Top 5 Issues that Make Things Go Wrong in Mixed-Signal Verification

    Key Findings:  There are a host of issues that arise in mixed-signal verification.  As discussed in earlier blogs, the industry trends indicate that teams need to prepare themselves for a more mixed world.  The good news is that these top five pitfalls are all avoidable.

    It’s always interesting to study the human condition.  Watching the world through the lens of mixed-signal verification brings an interesting microcosm into focus.  The top 5 items that I regularly see vexing teams are:

    1. When there’s a bug, whose problem is it?
    2. Verification team is the lightning rod
    3. Three (conflicting) points of view
    4. Wait, there’s more… software
    5. There’s a whole new language

    Reason 1: When there’s a bug, whose problem is it?

    It actually turns out to be a good thing when a bug is found during the design process.  Much, much better than when the silicon arrives back from the foundry of course. Whether by sheer luck, or a structured approach to verification, sometimes a bug gets discovered. The trouble in mixed-signal design occurs when that bug is near the boundary of an analog and a digital domain.


    Figure 1.   Whose bug is it?

    Typically designers are a diligent sort and make sure that their block works as desired. However, when things go wrong during integration, it is usually also project crunch time. So, it has to be the other guy’s bug, right?

    A step in the right direction is to have a third party, a mixed-signal verification expert, apply rigorous methods to the mixed-signal verification task.  But, that leads to number 2 on my list.

     

    Reason 2: Verification team is the lightning rod

    Having a dedicated verification team with mixed-signal expertise is a great start, but that team is typically hampered by the lack of a fast-executing model of the analog behavior (best practice today being a SystemVerilog real number model, or SV-RNM). That model is critical because it enables orders of magnitude more tests to be run against the design in the same timeframe. 

    Without that model, there will be a testing deficit. So, when the bugs come in, it is easy for everyone to point their finger at the verification team.


    Figure 2.  It’s the verification team’s fault

    Yes, the model creates a new validation task – it’s validation – but the speed-up enabled by the model more than compensates in terms of functional coverage and schedule.
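    For readers new to the term, an SV-RNM style model reduces the analog behavior to a discrete-time, real-valued calculation that a digital simulator executes very quickly. A minimal sketch is shown below; the block, values, and first-order settling behavior are purely illustrative, not a real IP model.

        // SV-RNM style sketch of a regulator: real-valued arithmetic on a fixed
        // time step instead of a SPICE netlist. All values are illustrative.
        module ldo_rnm_model (
          input  logic enable,
          input  real  vin,   // supply voltage, volts
          output real  vout   // regulated output, volts
        );
          localparam real VREF   = 1.2;   // regulation target
          localparam real TAU_NS = 50.0;  // settling time constant, in 1 ns steps

          always begin
            #1ns;  // evaluate on a fixed 1 ns step
            if (!enable)         vout = 0.0;
            else if (vin < VREF) vout = vin;                           // dropout region
            else                 vout = vout + (VREF - vout) / TAU_NS; // settle toward VREF
          end
        endmodule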

    The postscript on this finger-pointing is the institutionalization of SV-RNM. And, of course, the verification team gets its turn.


    Figure 3.  Verification team’s revenge

     

    Reason 3: Three (conflicting) points of view

    The third common issue arises when the finger-pointing settles down. There is still a delineation of responsibility that is often not easy to achieve when designs of a truly mixed-signal nature are being undertaken.  


    Figure 4.  Points of view and roles

    Figure 4 outlines some of the delegated responsibility, but notice that everyone is still potentially on the hook to create a model. It is questions of purpose, expertise, bandwidth, and convention that go into the decision about who will “own” each model. It is not uncommon for the modeling task to be a collaborative effort where the expertise on analog behavior comes from the analog team, while the verification team ensures that the model is constructed in such a manner that it will fit seamlessly into the overall chip verification. Less commonly, the digital design team does the modeling simply to enable the verification of their own work.

    Reason 4: Wait, there’s more… software

    As if verifying the function of a chip was not hard enough, there is a clear trend towards product offerings that include software along with the chip. In the mixed-signal design realm, many times this software has among its functions things like calibration and compensation that provide a flexible way of delivering guards against parameter drift. When the combination of the chip and the software are the product, they need to be verified together. This puts an enormous premium on fast executing SV-RNM.

     


    Figure 5.  There’s software, analog, and digital

    While the added dimension of software to the verification task creates new heights of complexity, it also serves as a very strong driver to get everyone aligned and motivated to adopt best known practices for mixed-signal verification.  This is an opportunity to show superior ability!


    Figure 6.  Change in perspective, with the right methodology

     

    Reason 5: There’s a whole new language

    Communication is of vital importance in a multi-faceted, multi-team program.  Time zones, cultures, and personalities aside, mixed-signal verification needs to be a collaborative effort.  Terminology can be a big stumbling block in getting to a common understanding. If we take a look at the key areas where significant improvement can usually be made, we can start to see the breadth of knowledge that is required to “get” the entirety of the picture:

    • Structure – Verification planning and management
    • Methodology – UVM (Universal Verification Methodology – Accellera Standard)
    • Measure – MDV (Metrics-driven verification)
    • Multi-engine – Software, emulation, FPGA proto, formal, static, VIP
    • Modeling – SystemVerilog (discrete time) down to SPICE (continuous time)
    • Languages – SystemVerilog, Verilog, Verilog-AMS, VHDL, SPICE, PSL, CPF, UPF

    Each of these areas has its own jumble of terminology and acronyms. It never hurts to create a team glossary to start with. Heck, I often get my LDO, IFV, and UDT all mixed up myself.

    Summary

    Yes, there are a lot of things that make it hard for the humans involved in the process of mixed-signal design and verification, but there is a lot that can be improved once the pain is felt (no pain, no gain is akin to no bugs, no verification methodology change). If we take a look at the key areas from the previous section, we can put a different lens on them and describe the value that they bring:

    • Structure – Uniformly organized, auditable, predictable, transparency
    • Methodology – Reusable, productive, portable, industry standard
    • Measure – Quantified progress, risk/quality management, precise goals
    • Multi-engine – Faster execution, improved schedule, enables new quality level
    • Modeling – Enabler, flexible, adaptable for diverse applications/design styles
    • Languages – Flexible, complete, robust, standard, scalability to best practices

    With all of this value firmly in hand, we can turn our thoughts to happier words:

    …  stay tuned for more!

     

     Steve Carlson





    Automatically Reusing an SoC Testbench in AMS IP Verification

    The complexity and size of mixed-signal designs in wireless, power management, automotive, and other fast-growing applications require continued advancements in mixed-signal verification methodology. An SoC in these applications incorporates a large number of analog and mixed-signal (AMS) blocks/IPs, some acquired from IP providers, some designed, often concurrently. AMS IP must be verified independently, but this is not sufficient to ensure an SoC will function properly: all scenarios of interaction among the many different AMS IP blocks at the full-chip/SoC level must be verified thoroughly. To reduce the overall verification cycle, AMS IP and SoC verification teams must work in parallel from the early stages of the design. Easier said than done! We will outline a methodology that can help.

    AMS designers verify that their IP meets the required specifications by running a testbench they develop for standalone, out-of-context verification. Typically, an AMS IP is an analog-centric, hierarchical design in schematic form, composed of blocks represented by transistor-level, HDL, and behavioral descriptions, and is verified in the Virtuoso® Analog Design Environment (ADE) using Spectre AMS Designer simulation. An SoC verification team typically uses a UVM SystemVerilog testbench at the full-chip level, where the AMS IP is represented with a simple digital or real number model, running Xcelium/DMS simulation from the command line.

    Ideally, AMS designers should also verify that their AMS IP functions properly in the context of full-chip integration, but reproducing an often complex UVM SystemVerilog testbench and bringing the top-level design description over to an analog-centric environment is not a simple task.

    Last year, Cadence partnered with Infineon on a project with a goal to automate the reuse of a top-level testbench in AMS verification. The automation enabled AMS verification engineers to automatically configure setup for verification runs by assembling all necessary options and files from the AMS IP Virtuoso GUI and digital SoC top-level command line configurations. The benefits of this method were:

    • AMS verification engineers did not need to re-create complex stimuli representing interaction of their IP at the top level
    • Top-level verification stays external to the AMS IP verification environment and continues to be managed by the SoC verification team, but can be reused by the AMS IP team without manual overhead
    • AMS IP is verified in-context and any inconsistencies are detected earlier in the verification process
    • Improved productivity and overall verification time

    For more details, please see Infineon’s CDNLive presentation.





    Integrating AMS IP in SoC Verification Just Got Easier

    Typically, analog designers verify their AMS IP in a schematic-driven, interactive environment, while SoC designers use a UVM SystemVerilog testbench run from a command line. In our last MS blog, we talked about automation that lets analog designers reuse the SystemVerilog testbench to verify AMS IP in exactly the same context as its SoC integration, reducing surprises and unnecessary iterations.

    But what about the other direction: selecting the proper AMS IP views for SoC verification? Manually export the netlist from Virtuoso and then manually assemble all of the files for use in a command-line-driven flow? Often, there are multiple views for the same instance (RNM, analog behavioral model, transistor netlist). Which one to pick? Who is supposed to update configuration files? We often work concurrently and update the AMS IP views frequently. Obviously, manually selecting the correct and most up-to-date AMS IP views for SoC verification is tedious and error prone. Thanks to Cadence innovation, there is a better way!

    Cadence has developed a Command-Line IP Selector (CLIPS) product as part of the Virtuoso® environment, which:

    • Bridges the gap between MS SoC command-line setup and the Virtuoso-based analog mixed-signal configuration
    • Allows seamless importing of AMS IP from the Virtuoso environment into an existing digital verification setup
    • Provides a GUI-based and command-line use model, flexible to fit into an existing design flow methodology

    CLIPS reads MS SoC command (irun) files, identifies the required AMS IP modules, uses Virtuoso ADE setup files to properly netlist those modules, and pulls the AMS IP out of the Virtuoso environment. All necessary files are properly extracted, prepared, and packaged as required for the MS SoC command-line verification run. The CLIPS setup can be saved and rerun as a batch process to ensure the latest IP from the hierarchy is being simulated.

    For more details, please see the CLIPS Rapid Adoption Kit on the Cadence Online Support page.






    Apache Vulnerabilities Spotted In OpenWhisk And Tomcat





    BlackBerry Users Get Free Remote Wipe, Backup And Location







    Dassault Systèmes and SATS Create World’s First Virtual Kitchen for In-Flight Catering Production

    • Dassault Systèmes collaborated with SATS, Asia’s leading food solutions and gateway services provider, to boost operational efficiency, minimize food waste
    • Growth in airline passenger travel underscores need for sustainable excellence in aerospace industry-related commercial services
    • Digital twin experience with the 3DEXPERIENCE platform bridges the gap between the virtual and real for in-flight catering production







    T23-2020 Notification regarding BIOVIA Pipeline Pilot Chemistry 2019 Hot Fix 3

    BIOVIA Pipeline Pilot Chemistry SDK 2019





    T24-2020 Notification regarding BIOVIA Pipeline Pilot Chemistry 2020 Hot Fix 1

    BIOVIA Pipeline Pilot Chemistry SDK 2020











    Suricata IDPE 4.1.5

    Suricata is a network intrusion detection and prevention engine developed by the Open Information Security Foundation and its supporting vendors. The engine is multi-threaded and has native IPv6 support. It's capable of loading existing Snort rules and signatures and supports the Barnyard and Barnyard2 tools.





    Suricata IDPE 5.0.0

    Suricata is a network intrusion detection and prevention engine developed by the Open Information Security Foundation and its supporting vendors. The engine is multi-threaded and has native IPv6 support. It's capable of loading existing Snort rules and signatures and supports the Barnyard and Barnyard2 tools.





    Suricata IDPE 5.0.1

    Suricata is a network intrusion detection and prevention engine developed by the Open Information Security Foundation and its supporting vendors. The engine is multi-threaded and has native IPv6 support. It's capable of loading existing Snort rules and signatures and supports the Barnyard and Barnyard2 tools.





    Suricata IDPE 5.0.2

    Suricata is a network intrusion detection and prevention engine developed by the Open Information Security Foundation and its supporting vendors. The engine is multi-threaded and has native IPv6 support. It's capable of loading existing Snort rules and signatures and supports the Barnyard and Barnyard2 tools.





    Qualys Security Advisory - OpenBSD Authentication Bypass / Privilege Escalation

    Qualys has discovered that OpenBSD suffers from multiple authentication bypass and local privilege escalation vulnerabilities.