
Michelle Darnell named director of Smeal's new Tarriff Center

Michelle R. Darnell, associate clinical professor in management and director of honor and integrity at Smeal, has been appointed as the inaugural director of the Tarriff Center for Business Ethics and Social Responsibility.





Amid Confusion, Feds Seek to Clarify Online Learning for Special Education Students

The Education Department says federal law should not be used to prevent schools from offering online learning to all students, including those with disabilities.





Clarifying Ed-Tech Research





Glorifying God through an unexpected gift

An OMer in Kazakhstan tells of a Kazakh friend and believer who, finding herself pregnant again before she is ready, wrestles with cultural norms.





Clarifications on Medigap

August 13, 2019 – The Delaware Department of Insurance has received inquiries from Medicare-eligible citizens concerning misunderstandings or misinformation they’ve received from some insurance agents and brokers regarding the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA). As this act is rather complex, it has not been determined whether consumers have been misinformed, or […]





DTH tariff order: Operators see surge in subscriber base, revenue per user

Post the NTO implementation, the dynamics have tilted in favour of the broadcasters.





After hours: Naveen Anand, Senior Director, Marketing, Oriflame South Asia

I am very organised; my entire day is well planned. After a quick walk in the morning, I reach the office by 8.30 am. I like to review what I wish to accomplish at the beginning of my day and schedule all the meetings beforehand.





Vanessa Bryant Sues LA County Sheriff's Department Over Crash Site Photos

Kobe and Gianna Bryant were among nine people who died in the January 26 helicopter crash in the mountains west of Los Angeles.





Delaware growers first on East Coast to get new tool to help with pesticide drift

Delaware growers are the first on the East Coast able to take advantage of a new online tool that helps protect sensitive crops from pesticides that may drift due to wind or weather. Delaware is the newest participant in the DriftWatch program, which allows growers of certain crops or commercial beekeepers to alert pesticide applicators of sensitive areas before they spray.





CBDT clarifies on TDS deduction by employers on old/new regime

The Budget this year introduced a new income tax regime under which individuals can opt for lower tax rates provided they don’t avail certain exemptions and deductions otherwise available under the law.





New Tariff of the Indian Maharaja-Deccan Odyssey

Travel Corporation India Ltd (TCI), operator of the Indian Maharaja-Deccan Odyssey train, has come up with a new tariff for the Indian Maharaja train to target domestic travellers. At the launch of the new season of the Indian Maharaja-Deccan Odyssey, managing director Mr. Madhavan Menon said...





MFIs see demand picking up soon due to emergency loan requirement, kharif cultivation

Microfinance institutions (MFIs) believe the industry will bounce back in no time as they see demand for microcredit rising soon because of emergency loan requirements of their customers to restart businesses.





Lower power tariff can boost Make in India after corona crisis

"It is the time to review the policy of cross subsidisation of power as this plays a part in making our industries non-competitive,” former finance secretary Subhash Chandra Garg said at an online panel discussion on the power sector.





Hyundai Motors to showcase electrified prototype displaying this concept

The all-new i10 will receive its public world premiere, alongside a special version of the model





CAA protests: Those born here before 1987 are Indians, clarifies government

A top government official said according to the 2004 amendments of the Citizenship Act, if one parent is an Indian and neither is an illegal immigrant, a person is considered an Indian citizen.





Trade meeting result: US to take U-turn on car tariffs for now

The Trump administration will hold off for now on imposing new tariffs on automobile imports as top officials weigh revisions to a report on the national security implications, according to two people familiar with the matter.





Amazon’s deep bench calms investors amid Jeff Bezos scandal, NYC rift

It’s been a rough few weeks for the world’s wealthiest man.





Maiden round-the-clock renewable energy auction gets tariff of Rs 2.90/unit

Since renewable energy is unpredictable in nature, other sources of energy are required to balance it, which increases the actual cost of such power and dissuades state-run power distribution companies from buying such electricity.





Central Government Employee? Modi govt makes new clarification on travelling allowance

Central Government Employees Travelling Allowance: This order was issued after consultation with the Comptroller and Auditor General of India, as mandated under Article 148(5) of the Constitution.





After flip-flops, IndiGo clarifies pay cut for senior employees will be for entire 2020-21

The country's largest domestic airline had on Friday announced pay cuts ranging between 5 and 25 per cent, in addition to its leave-without-pay programme for May, June and July, for senior employees.





Clarification About Indian Rupee Demonetisation Of Notes

Answers to questions about the Indian rupee demonetisation of Rs 500 and Rs 1,000 currency notes





Overcast and Windy and 37 F at Griffiss Air Force Base / Rome, NY


Winds are from the West at 27.6 gusting to 35.7 MPH (24 gusting to 31 KT). The pressure is 1010.6 mb and the humidity is 50%. The wind chill is 25. Last Updated on May 9 2020, 11:53 am EDT.





How to Verify Performance of Complex Interconnect-Based Designs?

With more and more SoCs employing sophisticated interconnect IP to link multiple processor cores, caches, memories, and dozens of other IP functions, these designs are enabling a new generation of low-power servers and high-performance mobile devices. The complexity of the interconnects and their advanced configurability contribute to already formidable design and verification challenges, which lead to the following questions:

While your interconnect subsystem might be functionally correct, are you starving your IP functions of the bandwidth they need? Are requests from latency-critical initiators processed on time? How can you ensure that all applications will receive the desired bandwidth in steady-state and corner use cases?

To answer these questions, Cadence recommends the Performance Verification Methodology to ensure that the system performance meets requirements at the different levels:

  1. Performance characterization: The first level of verification aims to verify path-to-path traffic, measuring the performance envelope. It targets integration bugs in areas like clock frequency, buffer sizes, and bridge configuration. It requires analyzing the latency and bandwidth of the design’s critical paths.
  2. Steady state workloads: The second level of verification aims to verify the master-by-master defined loads using traffic profiles. It identifies the impact on bandwidth when running multi-master traffic with various Quality-of-Service (QoS) settings. It analyzes the DDR sub-system’s efficiency, measures bandwidth and checks whether masters’ QoS requirements are met.
  3. Application specific use cases: The last level of verification simulates the use-cases and reaches the application performance corner cases. It analyzes the master-requested bandwidth as well as the DDR sub-system’s efficiency and bandwidth.
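
To make the first level concrete, here is a minimal SystemVerilog sketch of the kind of latency check it implies. This is not Interconnect Workbench code; the signal names (req_valid, rsp_valid) and the single-outstanding-request simplification are illustrative assumptions, whereas a real flow would use the tool's own probes and VIP monitors.

module latency_check #(parameter int MAX_LAT = 64) (
  input logic clk,
  input logic rst_n,
  input logic req_valid,   // request accepted by the interconnect
  input logic rsp_valid    // matching response returned
);
  // Assumes at most one outstanding request, purely to keep the sketch
  // short; a real monitor would track transaction IDs.
  property p_max_latency;
    @(posedge clk) disable iff (!rst_n)
      req_valid |-> ##[1:MAX_LAT] rsp_valid;
  endproperty

  a_max_latency: assert property (p_max_latency)
    else $error("response latency exceeded %0d cycles", MAX_LAT);
endmodule

Bound into a testbench, a checker along these lines turns the "processed on time?" question into a pass/fail metric that can be tracked alongside bandwidth measurements.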

Cadence has developed a set of tools to assist customers in the performance validation of their SoCs. Cadence Interconnect Workbench simplifies the setup and measurement of performance and verification testbenches and makes debugging of complex system behaviors a snap. The solution works with Cadence Verification IP and executes on the Cadence Xcelium® Enterprise Simulator or the Cadence Palladium® Accelerator/Emulator, with coverage results collected and analyzed in the Cadence vManager Metric-Driven Signoff Platform.

To verify the performance of steady-state workloads, Arm has just released a new AMBA Adaptive Traffic Profile (ATP) specification, which describes abstract AMBA traffic attributes and defines the behavior of the different traffic profiles in the system.

With the availability of Cadence Interconnect Workbench and AMBA VIP support of ATP, early adopters of the AMBA ATP specification can begin working immediately, ensuring compliance with the standard, and achieving the fastest path to SoC performance verification closure.

For more information on the AMBA Adaptive Traffic Profile, you can visit Dimitry's blog on AMBA Adaptive Traffic Profiles: Addressing The Challenge

More information on the Cadence Interconnect Workbench solution is available on the Cadence Interconnect Solution webpage.

Thierry





Dimensions to Verifying a USB4 Design

Verification of a USB4 router design is not just about USB4 but also about the inclusion of three other major protocols, namely USB3, DisplayPort (DP), and PCI Express (PCIe). These protocols can be simultaneously tunneled through a USB4 router. Put in simple terms, such tunneling involves converting the respective native USB3, DP, or PCIe protocol traffic into USB4 transport layer packets, which are tunneled through a USB4 fabric and converted back into the original native protocol traffic.

It may sound simple, but it is not.

Several aspects of a router come into the picture to carry out this task of converting native protocol traffic, routing it to the intended destination, and converting it back to its original form. Among them are the USB3, DP, and PCIe protocol adapters; the transport mechanism with its routing, flow control, paths, and path set-up and teardown; control and configuration; and the configuration spaces.

That is not all. There are core USB4-specific logical layer intricacies as well, which ensure that all the USB4 ports and links work as desired to provide up to 40Gbps speed and that USB4 traffic flows throughout the fabric in the intended way. These bring to the table features like high-speed links, ordered sets, lane initialization, the lane adapter state machine, low power, lane bonding, RS-FEC, the sideband channel, sleep and wake, and error checking.

All of these put together give rise to a very large verification space against which a USB4 router design must be verified. Broken down, this space falls into the following major dimensions:

  • Protocol Adapter Layer
    • USB3 tunneling
    • DP tunneling
    • PCIe tunneling
  • Host Interface Adapter Layer
  • Transport Layer
    • Flow control
    • Routing
    • Paths
  • Configuration layer and control packet protocol
  • Configuration spaces
  • Logical Layer

Verifying these dimensions independently is not enough to qualify the design as verified; they have to be verified in various combinations with each other too. Overall, all the parts of a USB4 router system need to work together coherently.
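
As a sketch of what "combinations" can mean in practice, the following SystemVerilog coverage model crosses a tunneled-protocol dimension with a transport-layer dimension. The enum values and sampling hooks are hypothetical, not taken from the USB4 specification; the point is that the cross, rather than the individual coverpoints, captures scenarios like DP tunneling during path teardown.

typedef enum {USB3_TUN, DP_TUN, PCIE_TUN, HOST_IF} proto_e;
typedef enum {PATH_SETUP, PATH_TEARDOWN, FLOW_CTRL, ROUTE_FWD} xport_e;

class usb4_cross_cov;
  proto_e proto;
  xport_e xport;

  covergroup cg;
    cp_proto: coverpoint proto;
    cp_xport: coverpoint xport;
    x_combo : cross cp_proto, cp_xport;  // combinations, not just dimensions
  endgroup

  function new();
    cg = new();
  endfunction

  function void sample(proto_e p, xport_e x);
    proto = p;
    xport = x;
    cg.sample();
  endfunction
endclass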

For example, the following diagram depicts the various layers that a USB4 router may comprise:

A USB4 router or a domain of routers does not work on its own. There is a Connection Manager per domain, which is a software-based entity managing a domain. A router provides the various capabilities for a Connection Manager to carry out its responsibilities of managing a domain.

It would not be an exaggeration to say that the spectrum of verification of a USB4 router ranges from the minute details of the logical layer to system-level dependencies, as the whole USB4 system is brought up layer by layer, step by step.

Cadence has a mature Verification IP solution that can help in the verification of USB4 designs. Cadence took an active part in the working group that defined the USB4 specification and has created a comprehensive Verification IP that has been used by multiple members over the last two years.

If you plan to have a USB4 compatible design, you can reduce the risk of adopting a new technology by using our proven and mature USB4 Verification IP. Please contact your Cadence local account team for more details and to get connected.





Verification of the Lane Adapter FSM of a USB4 Router Design Is Not Simple

Verifying the lane adapter state machine in a router design is quite an involved task and needs verification from several aspects, including its link training functionality.

The diagram below shows two lane adapters connected to each other, each going through the link training process. Each training sub-state transition is contingent on conditions for both the transmission and the reception of the relevant ordered sets. Until both conditions are satisfied, an adapter cannot transition to the next training sub-state.

As deduced from the lane adapter state machine section of the USB4 specification, the reception condition for the next training sub-state transition is less strict than the transmission condition. For example, for the LOCK1-to-LOCK2 transition, the reception condition requires only two SLOS symbols in a row to be detected, while the transmission condition requires at least four complete SLOS1 ordered sets to be sent.

Given these conditions in the specification, it is possible that lane adapter A detects the two SLOS or TS ordered sets being sent by lane adapter B on the other end right at the beginning, as soon as it starts transmitting its own SLOS or TS ordered sets. On the other hand, it is also possible that these SLOS or TS ordered sets have not yet been detected by lane adapter A even when it has met the condition of sending the minimum number of SLOS or TS ordered sets.

In such a case, lane adapter A, even though it has satisfied the transmission condition, cannot transition to the next sub-state because the reception condition is not yet met. Hence lane adapter A must first wait for the required number of ordered sets to be detected before it can go to the next sub-state. But this wait cannot be endless, as there are timeouts defined in the specification after which the training process may be re-attempted.

This interlocked way of operation also ensures that the state machine of one lane adapter does not go out of sync with that of the other. Scenarios of this type can occur whenever the lane adapter state machine transitions to the training state from other states.
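
A minimal SystemVerilog assertion sketch of this interlock is shown below. The signal names and the LOCK1/LOCK2 encoding are hypothetical abstractions of the specification text, not actual USB4 VIP code: tx_done stands for "minimum number of SLOS1 ordered sets sent" and rx_done for "two SLOS symbols in a row detected".

module lock1_to_lock2_check #(parameter int TIMEOUT = 1000) (
  input logic clk,
  input logic rst_n,
  input logic in_lock1,    // FSM currently in the LOCK1 sub-state
  input logic tx_done,     // transmission condition satisfied
  input logic rx_done,     // reception condition satisfied
  input logic goto_lock2   // FSM advances to LOCK2
);
  // The sub-state may only advance when BOTH conditions hold.
  a_interlock: assert property (@(posedge clk) disable iff (!rst_n)
    goto_lock2 |-> (in_lock1 && tx_done && rx_done));

  // The wait is not endless: LOCK1 must be left (advance or re-attempt)
  // within the training timeout.
  a_timeout: assert property (@(posedge clk) disable iff (!rst_n)
    $rose(in_lock1) |-> ##[1:TIMEOUT] !in_lock1);
endmodule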

Cadence has a mature Verification IP solution for verifying the various aspects of the logical layer of a USB4 router design, with capabilities that enable comprehensive verification.





Cadence JasperGold Brings Formal Verification into Mainstream IC Verification Flows

Formal verification is a complex technology that has traditionally required experts or specialized teams who stood apart from the IC design and verification flow. Taking a different approach, a new release of the Cadence JasperGold formal verification platform (June 8, 2015) provides formal techniques that complement simulation, emulation, and debugging in the form of “Apps” or under-the-hood solutions that any design or verification engineer can use.

JasperGold was the initial (in fact only) product of Jasper Design Automation, acquired by Cadence in 2014. Jasper pioneered the formal Apps concept several years ago. While the company had previously sold JasperGold as a one-size-fits-all solution, Jasper began selling semi-automated JasperGold Apps that solved specific problems using formal analysis technology.

The new release is the next generation of JasperGold and will be available later this month. It includes three major improvements over previous Cadence and Jasper formal analysis offerings:

  • A unified Cadence Incisive and JasperGold formal verification platform delivers up to 15X performance gain over previous solutions.
  • JasperGold is integrated into the Cadence System Development Suite, where it provides formal-assisted simulation, emulation, and coverage. As a result, System Development Suite users can find bugs three months earlier than with existing verification methods.
  • JasperGold’s formal analysis engines are integrated with the recently announced Indago debug platform, automating root cause analysis and on-the-fly, what-if exploration.

Best of Both Formal Verification Worlds

Taking advantage of technologies from both Cadence and Jasper, the new JasperGold represents a “best of both worlds” solution, according to Pete Hardee, product management director at Cadence. This solution combines technologies from the Cadence Incisive Enterprise Verifier and Incisive Formal Verifier with JasperGold formal analysis engines.

For example, to ease migration from Incisive formal tools, Cadence has integrated an Incisive common front end into the JasperGold apps platform. Jasper formal engines can run within the Incisive run-time environment. Cadence has also brought some selected Incisive formal engines into JasperGold.

As shown to the right, the JasperGold platform supports both the existing JasperGold front-end parser and the Incisive front-end parser. Hardee observed that this dual parser arrangement simplifies migration from Incisive formal tools to JasperGold, and provides a common compilation environment for people who want to use JasperGold with Incisive simulation. Further, the common run-time environment enables formal-assisted simulation.

The combination of JasperGold engines and Incisive engines supports two use models for formal analysis: formal proofs and bug hunting. In the first case, formal engines try all combinations of inputs without a testbench. The test is driven by formal properties written in languages such as SVA (SystemVerilog assertions) or PSL (Property Specification Language). Completion of a property is exhaustive proof that something can or cannot happen. This provides a “much stronger result” than simulation, Hardee said.
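
For a flavor of what such a property looks like, here is a minimal SVA example of the kind a formal engine can prove exhaustively. The arbiter signals are hypothetical; the point is that a completed proof covers every possible input sequence, which no finite set of simulation tests can claim.

module arb_mutex_check (
  input logic clk,
  input logic rst_n,
  input logic gnt0,
  input logic gnt1
);
  // Exhaustive proof target: the two grants are never active together.
  a_mutex: assert property (@(posedge clk) disable iff (!rst_n)
    !(gnt0 && gnt1));
endmodule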

He also noted that formal analysis doesn’t necessarily require that all properties are completed. “You can get a lot of value even if proofs don’t complete,” he said. “Proofs that run deep enough to find bugs are just fine.”

Bug hunting involves random searches, and JasperGold bug hunting engines are very fast. However, these engines don’t necessarily use the most optimal path to get to a bug. So, Cadence engineers brought a constraint solver from Incisive and integrated it into JasperGold. “It looks at the constraints in the environment and gives you a better starting point,” Hardee said. “It takes more up-front time, but once you’ve done that the bug hunting engines can actually take a shorter path and find a bug a lot quicker.”

Another new JasperGold capability from the Incisive Formal Verifier is called “search pointing.” This uses simulation to penetrate deeply into the state space, and then kicks off a random formal search from a given point that you’ve reached in simulation. This technique makes it possible to find bugs that are very deep in the design.

It is probably clear by now that a number of different formal “engines” may be required to solve a given verification problem. Traditionally, a formal tool (or user) will farm a problem out to many engines and see which one works best. To put more intelligence into that process, Cadence launched the Trident “multi-cooperating engine” a couple of years ago. That has now been brought into JasperGold, where it helps “orchestrate” the engines according to what will work best for the design. This is a big part of the reason for the 15X speedup noted earlier in this post.

Integration with System Development Suite

The Cadence System Development Suite is an integrated set of hardware/software development and verification engines, including virtual prototyping, Incisive simulation, emulation, and FPGA-based prototyping. As shown below, JasperGold technology is integrated into the System Development Suite in several places, including formal-assisted debug, formal-assisted verification closure, formal-assisted simulation, formal-assisted emulation, and the Incisive vManager verification planning tool.

Formal-assisted emulation sounds like it should be easy, especially since Cadence has both accelerated verification IP (VIP) and assertion-based VIP. However, there’s a complication. Accelerated VIP represents less verification content than simulation VIP, because you have to remove many checkers to get VIP to compile on a Palladium emulator. That’s because the Palladium requires synthesizable code.

What you can do, however, is use assertion-based VIP in “snoop mode” as shown below. Assertion-based VIP coded in synthesizable SystemVerilog can replace the missing checkers in accelerated VIP. In this diagram, everything in the green box is running in the emulator and is thus completely accelerated.
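
The sketch below illustrates what "synthesizable checker" means in this context. It is a hypothetical valid/ready handshake monitor, not Cadence accelerated VIP: it avoids delays, dynamic constructs, and simulation-only system tasks, raising an error flag instead, so code of this shape could in principle compile onto an emulator alongside the design.

module valid_drop_checker (
  input  logic clk,
  input  logic rst_n,
  input  logic valid,
  input  logic ready,
  output logic error_flag   // raised instead of printing a message
);
  logic pending;  // valid seen without ready in the previous cycle

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      pending    <= 1'b0;
      error_flag <= 1'b0;
    end else begin
      if (pending && !valid)
        error_flag <= 1'b1;   // valid dropped before the handshake completed
      pending <= valid && !ready;
    end
  end
endmodule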

 

Another example of formal-assisted emulation has to do with deep traces. As Hardee noted, emulation will produce very long traces, and it can be very difficult to find a point of interest in the trace and determine what caused an error. With formal-assisted emulation, users can find interesting events within the traces and create properties that mark them, so a debugger can find these events and trace back to the root cause.

Formal-assisted verification closure is available with the new JasperGold release. This is possible because you can use the vManager product to determine which tasks were completed by formal engines. It’s important information for verification managers who are not used to formal tools, Hardee noted.

Another aspect of formal-assisted verification closure is the JasperGold Unreachability Analysis (UNR) App, which can save simulation users weeks of time and effort. This App takes in the simulation coverage database and RTL, and automatically generates properties to explore coverage holes and determine if holes are reachable or unreachable. The App then generates an unreachable coverage point database. If the unreachable code does something useful, there’s a bug in the design or the testbench; if not, you don’t have to worry about it. The diagram below shows how it works.

Formal-Assisted Debugging

The third major component of the JasperGold announcement is the integration of formal analysis into the Indago debugging platform. As shown below, this platform has several apps, including the Indago Debug Analyzer. Two formal debug capabilities from the Jasper Visualize environment have been added to the Indago Debug Analyzer:

  • Highlight Relevant Logic: This highlights the “cone of influence,” or the logic that is involved in reaching a given point
  • Why: This button highlights the immediate causes for a given event, and allows users to trace backwards in time

 

More formal capabilities will come with the Indago Advanced Debug Analyzer app, scheduled for release towards the end of 2015. This includes Quiet Trace, a Jasper capability that reduces trace activity to transactions relevant to an event. Also, a what-if analysis allows on-the-fly trace editing and recalculation to explore effects and sensitivities, without having to re-compile and re-execute the simulation.

Finally, Cadence has a Superlint flow that is now fully integrated with the JasperGold Visualize debugger. This two-tiered flow includes a basic lint capability as well as automated formal analysis based on the JasperGold Structural Property Synthesis app. “This could be a very good entry point for designers to start using formal,” Hardee said.

“Formal is taking off,” Hardee concluded. “People are no longer talking about return on investment for formal—they have established that. Now they’re supporting a proliferation of formal in their companies such that a wider set of people experience the benefit from that proven return on investment.”

Further information is available at the JasperGold Formal Verification Platform (Apps) page.

Richard Goering

Related Blog Posts

JUG Keynote—How Jasper Formal Verification Technology Fits into the Cadence Flow

Why Cadence Bought Jasper—A New Era in Formal Analysis

Q&A: An R&D Perspective on Formal Verification—Past, Present and Future





checkRoute or VerifyConnectivity

Hello Everyone,

I was finishing the layout via Innovus and ran verifyConnectivity followed by checkRoute.

verifyConnectivity was okay and it showed no errors and no warnings, whereas checkRoute showed there are 3 unrouted nets.

When I ran the checkRoute command again immediately, it showed no unrouted/unconnected nets.

Which of these commands should we trust, or is this really an unrouted-nets issue?

Looking forward to a response; thanks in advance.

Regards,

Vijay





Mediatek Deploys Perspec for SoC Verification of Low Power Management (part 3 of 3)

Here we conclude the blog series and highlight the results of MediaTek's use of Cadence Perspec™ System Verifier for their SoC-level verification. In case you missed it, Part 1 of the blog is here, and Part 2 is here. One of their key...(read more)





Perspec System Verifier is #1 in Portable Stimulus in 2017 User Survey

It’s now official: Perspec System Verifier is rated the #1 product in the #1 category of Portable Stimulus, according to the 2017 EDA User Survey published on Deepchip.com. There were 33 user responses in favor of Perspec as the #1 tool, and dr...(read more)





Cadence Collaborates with Test & Verification Solutions on Portable Stimulus

The Cadence® Connections® Verification Program brings together a worldwide network of services, training, and IP development experts that support Cadence verification solutions. The program members help customers accelerate the adoption of new...(read more)





What’s Hot in Verification at this Year’s CDNLive? It’s Portable Stimulus Again!

CDNLive is a user conference, and verification is one of the largest categories of content with multiple tracks covering multiple days. Portable stimulus is one of the hottest new areas in verification, and continues to be popular in all venues. At l...(read more)





Integration and Verification of PCIe Gen4 Root Complex IP into an Arm-Based Server SoC Application

Learn about the challenges and solutions for integrating and verifying PCIe® Gen4 into an Arm-based server SoC. Listen to this relatively short webinar by Arm and Cadence, as they describe the collaboration and results, including methodology and...(read more)





Willamette HDL and Cadence Develop the Industry's First PSS Training Course for Perspec System Verifier

Cadence continues to be a leader in SoC verification and has expanded our industry investment in Accellera portable stimulus language standardization. Some customers have expressed reservations that portable stimulus requires the effort of learn...(read more)





Verification Reflections on 2018

In my predictions for 2018 I had identified five key trends driving verification in 2018 – Security, Safety, Application Specificity, Processor Ecosystems and System Design Enablement, all centered around ecosystems. Looking back now as the yea...(read more)





Metamorphic Testing: The Future of Verification?

Curious about what’s going on behind the scenes with verification? Bernard Murphy, Jim Hogan, and our own Paul Cunningham are on the case with the “Innovation in Verification” blog stream over at semiwiki.com. Every month, this trio reviews a newly-published paper in academia that pertains to verification and discusses its implications. Be sure to stop by—it’s a great place to see what might be coming down the pipeline someday.

This month, they discuss the implications of metamorphic testing. The purpose of metamorphic testing is to define a verification approach for cases where there is no “golden reference.” This situation comes up a lot now as designs grow in complexity, and it raises the question: how does one know the design is verified if there is no standard to compare it against? Metamorphic testing addresses the lack of a “gold standard” by comparing the results of related tests instead. The paper reviewed by this team used metamorphic testing to study methods of managing JavaScript tags.
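
A toy SystemVerilog sketch makes the idea concrete. The dut_sum function is a stand-in for a design under test, and the permutation relation is an invented example (the reviewed paper works on JavaScript tags, not arithmetic): without any golden reference, a mismatch between two related runs is enough to flag a bug.

module metamorphic_demo;
  function automatic int dut_sum(int a[4]);   // stand-in for the DUT
    dut_sum = 0;
    foreach (a[i]) dut_sum += a[i];
  endfunction

  initial begin
    int x[4] = '{3, 1, 4, 1};
    int y[4] = '{1, 4, 1, 3};   // a permutation of x
    // Metamorphic relation: permuting the input must not change the sum.
    if (dut_sum(x) != dut_sum(y))
      $error("metamorphic relation violated");
    else
      $display("relation holds: %0d", dut_sum(x));
  end
endmodule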

Paul saw this as a valuable new class of coverage. Metamorphic testing represents a way to create better distribution analyses through understanding the relationships among tests. This can reveal critical-but-complex issues that traditional verification methods may overlook. He saw this as an emerging class of coverage that new verification tools could be built around. Paul asserted that a future metamorphic-testing-based tool’s main contribution to the field of verification would be to better analyze noisy performance results where the noise is multi-modal. It could be useful in detecting race conditions and similar hard-to-debug anomalies. Paul also sees metamorphic testing as ripe for ML techniques. Overall—Paul sees a bright future for metamorphic testing in verification.

Jim is reminded of Solido and Spice—these metamorphic testing capabilities are “more than just a feature”—they might be a product. Maybe even a whole new class of verification tools, as Paul said.

Bernard says that this topic is “too rich to address in one blog”, so be sure to head over to the post to see more of what the future has in store for verification.





New Rapid Adoption Kit (RAK) Enables Productive Mixed-Signal, Low Power Structural Verification

All engineers can enhance their mixed-signal, low-power structural verification productivity by learning while doing with the PIEA RAK (Power Intent Export Assistant Rapid Adoption Kit). They can verify a mixed-signal chip by automatically generating a macromodel for their analog block and running it through Conformal Low Power (CLP) to perform a low-power structural check.

The power structure integrity of a mixed-signal, low-power block is verified via Conformal Low Power integrated into the Virtuoso Schematic Editor Power Intent Export Assistant (VSE-PIEA). Here is the flow.

 

Applying the flow iteratively from lower to higher levels can verify the power structure.

Cadence customers can learn more in a Rapid Adoption Kit (RAK) titled IC 6.1.5 Virtuoso Schematic Editor XL PIEA, Conformal Low Power: Mixed-Signal Low Power Structural Verification.

The RAK includes a demo design, with instructions on how to set up the user environment. It introduces the Power Intent Export Assistant (PIEA) feature implemented in the Virtuoso IC615 release. The power intent that is extracted is then verified by calling Conformal Low Power (CLP) inside the Virtuoso environment.

  • Last Update: 11/15/2012.
  • Validated with IC 6.1.5 and CLP 11.1

The RAK uses a sample test case to go through PIEA + CLP flow as follows:

  • Setup for PIEA
  • Perform power intent extraction
  • CPF Import: It is recommended to import macro CPF, as opposed to writing CPF for sub-blocks. If you choose to import design CPF files, please make sure the design CPF file has power domain information for all top-level boundary ports
  • Generate macro CPF and design CPF
  • Perform low power verification by running CLP

It is also recommended to go through older RAKs as prerequisites.

  • Conformal Low Power, RTL Compiler and Incisive: Low Power Verification for Beginners
  • Conformal Low Power: CPF Macro Models
  • Conformal Low Power and RTL Compiler: Low Power Verification for Advanced Users

To access all these RAKs, visit our RAK Home Page for the Synthesis, Test, and Verification flows.

Note: To access the documents above, use your Cadence credentials to log on to the Cadence Online Support (COS) website. Cadence Online Support (https://support.cadence.com/) is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you can receive new solutions, Application Notes (technical papers), videos, manuals, and more.

You can send us your feedback by adding a comment below or using the feedback box on Cadence Online Support.

Sumeet Aggarwal





New Incisive Low-Power Verification for CPF and IEEE 1801 / UPF

On May 7, 2013, Cadence announced a 30% productivity gain in the June 2013 Incisive Enterprise Simulator 13.1 release. Advanced debug visualization, faster turnaround time, and the extension of eight years of low-power verification innovation to IEEE 1801/UPF are the key capabilities in the release.

When we talk about low-power verification, it’s easy to equate it with simulation. For certain, simulation is the heart of a low-power verification solution. Simulation enables engineers to run their design in the context of power intent. The challenge is that a simulation-only approach is inadequate. For example, if engineers could achieve SoC quality by verifying the individual function of each power control module (PCM), then simulation could be enough. For a single power domain, simulation can be enough.

However, when the SoC has multiple power domains -- and we have seen SoCs with hundreds of them -- engineers have to check the PCMs and all of the arcs between the power modes.  These SoCs often synchronize some of the domain switching to reduce overall complexity, creating the potential for signal skew errors on the control signals for the connected domains.  Managing these complexities requires verification methodologies including advanced debug, verification planning, assertion-based verification, Universal Verification Methodology - Low Power (UVM-LP), and more (see Figure 1).
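
One classic arc-level check of this kind can be written as a short SystemVerilog assertion. The signals here are hypothetical placeholders for one domain's controls, a sketch of the intent rather than anyone's shipping checker: isolation must be active before and throughout power-off, and may only be released once power is back.

module iso_order_check (
  input logic clk,
  input logic rst_n,
  input logic pwr_on,   // domain power switch state
  input logic iso_en    // isolation enable for the domain's outputs
);
  // Power may only drop while isolation is (and was) asserted.
  a_iso_before_off: assert property (@(posedge clk) disable iff (!rst_n)
    $fell(pwr_on) |-> ($past(iso_en) && iso_en));

  // Isolation may only be released while power is on.
  a_iso_release: assert property (@(posedge clk) disable iff (!rst_n)
    $fell(iso_en) |-> pwr_on);
endmodule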

 

Figure 1:  Comprehensive Low-Power Verification 

But even advanced verification methodologies on top of simulation aren’t enough. For example, the state machine that defines the legal and illegal power mode transitions is often written in software. The speed and capacity of the Palladium emulation platform are ideal for verifying in this context, and it is integrated with simulation, sharing debug, UVM acceleration, and static low-power checks. It also reports verification progress into a holistic plan for the SoC. Another example is the ability to compare the design in the implementation flow with the design running in simulation, to make sure that what we verify is what we intend to build.

Taken together, verification across multiple engines provides the comprehensive low-power verification needed for today's advanced node SoCs.  That's the heart of this low-power verification announcement. 

Another point you may have noticed is the extension of the Common Power Format (CPF)-based power-aware support in the Incisive Enterprise Simulator to IEEE 1801. We chose to bring IEEE 1801 to simulation first because users like you sometimes need to mix vendors in regression flows. Over time, Cadence will extend the low-power capabilities throughout its product suite to IEEE 1801.

If you are using CPF today, you already have the best low-power solution. The evidence is clear:  the upcoming IEEE 1801-2013 update includes many of the CPF features contributed to 1801/UPF to enable methodology convergence.  Since you already have those features in the CPF flow, any migration before you have a mature IEEE 1801-2013 tool flow would reduce the functionality you have today.

If you are using Unified Power Format (UPF) 1.0 today, you want to start planning your move toward the IEEE 1801-2013 standard.  A good first step would be to move to the IEEE 1801-2009 standard.  It fills holes in the earlier UPF 1.0 definition.  While it does lack key features in -2013, it is an improvement that will make the migration to -2013 easier. The Incisive 13.1 release will run both UPF 1.0 and IEEE 1801-2009 power intent today.

Over the next few weeks you'll see more technical blogs about the low-power capabilities coming in the Incisive 13.1 release.  You can also join us on June 19 for a webinar that will introduce those capabilities using the reference design supplied with the Incisive Enterprise Simulator release.

=Adam "The Jouler" Sherer

(Yes, "Sherilog" is still here.  :-) )





Freescale Success Stepping Up to Low-Power Verification - Video

Freescale was already a successful Incisive® simulation CPF low-power user when they decided to step up their game. In November 2013, at CDNLive India, they presented a paper explaining how they improved their ability to find power-related bugs using a more sophisticated verification flow. We were able to catch up with Abhinav Nawal just after his presentation to capture this video explaining the key points of his paper.

Abhinav had already established a low-power simulation process using directed tests for a design with power intent captured in CPF. While that is a sound approach, it tends to focus on the states associated with each power control module and at least some of the critical power mode changes. Since the full system can potentially exercise unforeseen combinations of power states, the directed-test approach may be insufficient. Abhinav built a more complete low-power verification approach rooted in a low-power verification plan captured in Cadence® Incisive Enterprise Manager. He still used Incisive Enterprise Simulator and the SimVision debugger to execute and debug his design, but he also added Incisive Metric Center to analyze coverage from his low-power tests and connect that data back to the low-power verification plan. As a result, he was able to find many critical system-level corner-case issues which, left undetected, would have been catastrophic for his SoC. In the paper, Abhinav presents some of the key problems this approach was able to find.

You can achieve results similar to Abhinav’s. Incisive Enterprise Simulator can generate a low-power verification plan from the power format and power-aware assertions, and it can collect power-aware coverage data. To get started, you can use the Incisive Low-Power Simulation Rapid Adoption Kit (RAK) for CPF, available on Cadence Online Support.

Just another happy Cadence low-power verification user!

Regards,

Adam "The Jouler" Sherer  

 

 





Sudoku solver using Incisive Enterprise Verifier (IEV) and Assertion-Driven Simulation (ADS)

Just in time for the holidays: inside the posted tar ball is some code to solve 9x9 Sudoku puzzles with the Assertion-Driven Simulation (ADS) capability of Incisive Enterprise Verifier (IEV). Enjoy!

Joerg Mueller
Solutions Engineer for Team Verify
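
For readers who just want the flavor without opening the tar ball, here is a different but related way to express the same puzzle in plain SystemVerilog, using randomization constraints rather than the IEV/ADS flow from the post (this is not the posted code, and the clue values are made up):

class sudoku;
  rand bit [3:0] g[9][9];

  constraint c_val { foreach (g[r,c]) g[r][c] inside {[1:9]}; }

  // any two cells sharing a row, a column, or a 3x3 box must differ
  constraint c_distinct {
    foreach (g[r1,c1]) foreach (g[r2,c2])
      if ((r1 != r2 || c1 != c2) &&
          (r1 == r2 || c1 == c2 ||
           (r1/3 == r2/3 && c1/3 == c2/3)))
        g[r1][c1] != g[r2][c2];
  }
endclass

module solve;
  initial begin
    sudoku s = new();
    // pin a couple of made-up givens, then let the solver fill the rest
    if (!s.randomize() with { g[0][0] == 5; g[4][4] == 7; })
      $error("no solution found");
    else
      foreach (s.g[r]) $display("%p", s.g[r]);
  end
endmodule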





Five Reasons I'm Excited About Mixed-Signal Verification in 2015

Key Findings: Many more design teams will be reaching the mixed-signal methodology tipping point in 2015. That means you need to have a (verification) plan, and measure and execute against it.

As 2014 draws to a close, it is time to look ahead to the coming years and make a plan. While the macro view of the chip design world shows that it has been a mixed-signal world for a long time, it is primarily the digital teams that have rapidly evolved design and verification practices over the past decade. Well, I claim that is about to change. 2015 will be a watershed year for many more design teams because of the following factors:

  • 85% of designs are mixed signal, and it is going to stay that way (there is no turning back)
  • Advanced node drives new techniques, but they will be applied on all nodes
  • Equilibrium of mixed-signal designs being challenged, complexity raises risk level
  • Tipping point signs are evident and pervasive, things are going to change
  • The convergence of “big A” and “big D” demands true mixed-signal practices

Reason 1: Mixed-signal is dominant

To begin the examination of what is going to change and why, let’s start with what is not changing. IBS reports that mixed-signal accounts for over 85% of chip design starts in 2014, and that the percentage will rise and then hold steady at 85% in the coming years. It is a mixed-signal world and there is no turning back!

 

Figure 1. IBS: Mixed-signal design starts as percent of total

The foundational nature of mixed-signal designs in the semiconductor industry is well established. The reason it is exciting is that a stable foundation provides a platform for driving change. (It’s hard to drive on crumbling infrastructure.  If you’re from California, you know what I mean, between the potholes on the highways and the earthquakes and everything.)

Reason 2: Innovation in many directions, mostly mixed-signal applications

While the challenges being felt at the advanced nodes, such as double patterning and the adoption of FinFET devices, have slowed some from moving on to nodes past 28nm, innovation has simply turned in different directions. Applications for the Internet of Things, automotive, and medical all have strong mixed-signal elements in their semiconductor content value proposition. What is critical to recognize is that many of the design techniques that were initially driven by advanced-node programs have merit across the spectrum of active semiconductor process technologies. For example, digitally controlled, calibrated, and compensated analog IP, along with power-reducing multi-supply domains, power shut-off, and state retention, are being applied in many programs on “legacy” nodes.

Another graph from IBS shows that the design starts at 45nm and below will continue to grow at a healthy pace.  The data also shows that nodes from 65nm and larger will continue to comprise a strong majority of the overall starts. 


Figure 2.  IBS: Design starts per process node

TSMC made a comprehensive announcement in September related to “wearables” and the Internet of Things. From their press release:

TSMC’s ultra-low power process lineup expands from the existing 0.18-micron extremely low leakage (0.18eLL) and 90-nanometer ultra low leakage (90uLL) nodes, and 16-nanometer FinFET technology, to new offerings of 55-nanometer ultra-low power (55ULP), 40ULP and 28ULP, which support processing speeds of up to 1.2GHz. The wide spectrum of ultra-low power processes from 0.18-micron to 16-nanometer FinFET is ideally suited for a variety of smart and power-efficient applications in the IoT and wearable device markets. Radio frequency and embedded Flash memory capabilities are also available in 0.18um to 40nm ultra-low power technologies, enabling system level integration for smaller form factors as well as facilitating wireless connections among IoT products.

Compared with their previous low-power generations, TSMC’s ultra-low power processes can further reduce operating voltages by 20% to 30% to lower both active power and standby power consumption and enable significant increases in battery life—by 2X to 10X—when much smaller batteries are demanded in IoT/wearable applications.

The focus on power is quite evident and this means that all of the power management and reduction techniques used in advanced node designs will be coming to legacy nodes soon.

Integration and miniaturization are being pursued from the system-level in, as well as from the process side. Techniques for power reduction and system energy efficiency are central to innovations under way.  For mixed-signal program teams, this means there is an added dimension of complexity in the verification task. If this dimension is not methodologically addressed, the level of risk adds a new dimension as well.

Reason 3: Trends are pushing the limits of established design practices

Risk is the bane of every engineer, but without risk there is no progress. And, sometimes the amount of risk is not something that can be controlled. Figure 3 shows some of the forces at work that cause design teams to undertake more risk than they would ideally like. With price and form factor as primary value elements in many growing markets, integration of analog front-end (AFE) with digital processing is becoming commonplace.  

 

Figure 3.  Trends pushing mixed-signal out of equilibrium

The move to the manufacturing sweet spot at 28nm enables more integration while providing excellent power and performance parameters with the best cost per transistor. Variation becomes greater and harder to control. For analog design, this means more digital assistance for calibration and compensation. For greatest flexibility and resiliency, many will opt to embed a microcontroller to perform the analog control functions in software. Finally, the first wave of leaders has already crossed the methodology bridge into true mixed-signal design and verification; those who do not follow are destined to fall farther behind.

Reason 4: The tipping point accelerants are catching fire

The factors cited in Reason 3 all have a technical grounding that serves to create pain in the chip-development process. The more of these factors that are present, the harder it is to ignore the pain and forgo the relief afforded by adopting known best practices for truly mixed-signal design (versus divide-and-conquer design along analog and digital lines).

In the past design performance was measured in MHz with simple static timing and power analysis. Design flows were conveniently partitioned, literally and figuratively, along analog and digital boundaries. Today, however, there are gigahertz digital signals that interact at the package and board level in analog-like ways. New, dynamic power analysis methods enabled by advanced library characterization must be melded into new design flows. These flows comprehend the growing amount of feedback between analog and digital functions that are becoming so interlocked as to be inseparable. This interlock necessitates design flows that include metrics-driven and software-driven testbenches, cross fabric analysis, electrically aware design, and database interoperability across analog and digital design environments.


Figure 4.  Tipping point indicators

Energy efficiency is a universal driver at this point.  Be it cost of ownership in the data center or battery life in a cell phone or wearable device, using less power creates more value in end products. However, layering multiple energy management and optimization techniques on top of complex mixed-signal designs adds yet more complexity demanding adoption of “modern” mixed-signal design practices.

Reason 5: Convergence of analog and digital design

Divide and conquer is always a powerful tool for complexity management.  However, as the number of interactions across the divide increase, the sub-optimality of those frontiers becomes more evident. Convergence is the name of the game.  Just as analog and digital elements of chips are converging, so will the industry practices associated with dealing with the converged world.


Figure 5. Convergence drivers

Truly mixed-signal design is a discipline that unites the analog and digital domains. That means that there is a common/shared data set (versus forcing a single cockpit or user model on everyone). 

In verification, the modern saying is “start with the end in mind”. That means creating a formal approach to planning what will be tested, how it will be tested, and the metrics for success of the tests. Organizing the mechanics of testbench development using the Universal Verification Methodology (UVM) has proven benefits. The mixed-signal elements of SoC verification are not exempt from those benefits.

Competition is growing ever more fierce for semiconductor design teams. Not being equipped with the best-known practices creates a competitive deficit that is hard to overcome with hard work alone. As the landscape of IC content drives toward a more energy-efficient, mixed-signal nature, the mounting risk posed by old methodologies may cause casualties in the coming year. Better to move forward with haste and create a position of strength from which differentiation and excellence in execution can be forged.

Summary

2015 is going to be a banner year for mixed-signal design and verification methodologies. Those who have forged ahead are in a position of execution advantage. Those who have not will be scrambling to catch up, but with the benefit of following a path that has been proven by many market leaders.





Top 5 Issues that Make Things Go Wrong in Mixed-Signal Verification

Key Findings:  There are a host of issues that arise in mixed-signal verification.  As discussed in earlier blogs, the industry trends indicate that teams need to prepare themselves for a more mixed world.  The good news is that these top five pitfalls are all avoidable.

It’s always interesting to study the human condition.  Watching the world through the lens of mixed-signal verification brings an interesting microcosm into focus.  The top 5 items that I regularly see vexing teams are:

  1. When there’s a bug, whose problem is it?
  2. Verification team is the lightning rod
  3. Three (conflicting) points of view
  4. Wait, there’s more… software
  5. There’s a whole new language

Reason 1: When there’s a bug, whose problem is it?

It actually turns out to be a good thing when a bug is found during the design process.  Much, much better than when the silicon arrives back from the foundry of course. Whether by sheer luck, or a structured approach to verification, sometimes a bug gets discovered. The trouble in mixed-signal design occurs when that bug is near the boundary of an analog and a digital domain.


Figure 1.   Whose bug is it?

Typically designers are a diligent sort and make sure that their block works as desired. However, when things go wrong during integration, it is usually also project crunch time. So, it has to be the other guy’s bug, right?

A step in the right direction is to have a third party, a mixed-signal verification expert, apply rigorous methods to the mixed-signal verification task.  But, that leads to number 2 on my list.

 

Reason 2: Verification team is the lightning rod

Having a dedicated verification team with mixed-signal expertise is a great start, but that team is typically hampered by the lack of a fast-executing model of the analog behavior (best practice today being a SystemVerilog real number model, or SV-RNM). That model is critical because it enables orders of magnitude more tests to be run against the design in the same timeframe.

Without that model, there will be a testing deficit. So, when the bugs come in, it is easy for everyone to point their finger at the verification team.


Figure 2.  It’s the verification team’s fault

Yes, the model creates a new task (the model itself must be validated), but the speed-up enabled by the model more than compensates in terms of functional coverage and schedule.
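
For readers new to the term, here is a minimal sketch of what an SV-RNM model can look like: a hypothetical 8-bit DAC reduced to a real-valued transfer function. It runs at digital-simulation speed, which is exactly where the orders-of-magnitude test throughput comes from, and it is also why the validation task above matters, since nothing ties the model to the schematic except disciplined checking.

module dac8_rnm #(parameter real VREF = 1.2) (
  input  logic       clk,
  input  logic [7:0] code,
  output real        vout
);
  // Ideal transfer curve; no settling, noise, or loading effects.
  always @(posedge clk)
    vout <= VREF * code / 255.0;
endmodule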

The postscript on this finger-pointing is the institutionalization of SV-RNM. And, of course, the verification team gets its turn.


Figure 3.  Verification team’s revenge

 

Reason 3: Three (conflicting) points of view

The third common issue arises when the finger-pointing settles down. There is still a delineation of responsibility that is often not easy to achieve when designs of a truly mixed-signal nature are being undertaken.  


Figure 4.  Points of view and roles

Figure 4 outlines some of the delegated responsibility, but notice that everyone is still potentially on the hook to create a model. It is questions of purpose, expertise, bandwidth, and convention that go into the decision about who will “own” each model. It is not uncommon for the modeling task to be a collaborative effort where the expertise on analog behavior comes from the analog team, while the verification team ensures that the model is constructed in such a manner that it will fit seamlessly into the overall chip verification. Less commonly, the digital design team does the modeling simply to enable the verification of their own work.

Reason 4: Wait, there’s more… software

As if verifying the function of a chip were not hard enough, there is a clear trend toward product offerings that include software along with the chip. In the mixed-signal design realm, this software often includes functions like calibration and compensation that provide a flexible guard against parameter drift. When the combination of the chip and the software is the product, the two need to be verified together. This puts an enormous premium on fast-executing SV-RNM.

 


Figure 5.   There’s software analog and digital

While the added dimension of software to the verification task creates new heights of complexity, it also serves as a very strong driver to get everyone aligned and motivated to adopt best known practices for mixed-signal verification.  This is an opportunity to show superior ability!


Figure 6.  Change in perspective, with the right methodology

 

Reason 5: There’s a whole new language

Communication is of vital importance in a multi-faceted, multi-team program.  Time zones, cultures, and personalities aside, mixed-signal verification needs to be a collaborative effort.  Terminology can be a big stumbling block in getting to a common understanding. If we take a look at the key areas where significant improvement can usually be made, we can start to see the breadth of knowledge that is required to “get” the entirety of the picture:

  • Structure – Verification planning and management
  • Methodology – UVM (Universal Verification Methodology – Accellera Standard)
  • Measure – MDV (Metrics-driven verification)
  • Multi-engine – Software, emulation, FPGA proto, formal, static, VIP
  • Modeling – SystemVerilog (discrete time) down to SPICE (continuous time)
  • Languages – SystemVerilog, Verilog, Verilog-AMS, VHDL, SPICE, PSL, CPF, UPF

Each of these areas has its own jumble of terminology and acronyms. It never hurts to create a team glossary to start with. Heck, I often get my LDO, IFV, and UDT all mixed up myself.

Summary

Yes, there are a lot of things that make it hard for the humans involved in the process of mixed-signal design and verification, but there is a lot that can be improved once the pain is felt (no pain, no gain is akin to no bugs, no verification methodology change). If we take a look at the key areas from the previous section, we can put a different lens on them and describe the value that they bring:

  • Structure – Uniformly organized, auditable, predictable, transparency
  • Methodology – Reusable, productive, portable, industry standard
  • Measure – Quantified progress, risk/quality management, precise goals
  • Multi-engine – Faster execution, improved schedule, enables new quality level
  • Modeling – Enabler, flexible, adaptable for diverse applications/design styles
  • Languages – Flexible, complete, robust, standard, scalability to best practices

With all of this value firmly in hand, we can turn our thoughts to happier words:

…  stay tuned for more!

 

 Steve Carlson





Automatically Reusing an SoC Testbench in AMS IP Verification

The complexity and size of mixed-signal designs in wireless, power management, automotive, and other fast-growing applications require continued advancements in mixed-signal verification methodology. An SoC in these fast-growing applications incorporates a large number of analog and mixed-signal (AMS) blocks/IPs, some acquired from IP providers, some designed in-house, often concurrently. AMS IP must be verified independently, but this is not sufficient to ensure that an SoC will function properly: all scenarios of interaction among the many different AMS IP blocks must be verified thoroughly at the full-chip/SoC level. To reduce the overall verification cycle, AMS IP and SoC verification teams must work in parallel from the early stages of the design. Easier said than done! We will outline a methodology that can help.

AMS designers verify that their IP meets required specifications by running a testbench they develop for standalone, out-of-context verification. Typically, an AMS IP is an analog-centric, hierarchical design in schematic, composed of blocks represented by transistor-level, HDL, and behavioral descriptions, and is verified in the Virtuoso® Analog Design Environment (ADE) using Spectre AMS Designer simulation. An SoC verification team typically uses a UVM SystemVerilog testbench at the full-chip level, where the AMS IP is represented by a simple digital or real number model, running Xcelium/DMS simulation from the command line.

Ideally, AMS designers should also verify that their AMS IP functions properly in the context of the full-chip integration, but reproducing an often complex UVM SystemVerilog testbench and bringing the top-level design description over to an analog-centric environment is not a simple task.

Last year, Cadence partnered with Infineon on a project whose goal was to automate the reuse of a top-level testbench in AMS verification. The automation enabled AMS verification engineers to automatically configure the setup for verification runs by assembling all necessary options and files from the AMS IP Virtuoso GUI and the digital SoC top-level command-line configurations. The benefits of this method were:

  • AMS verification engineers did not need to re-create complex stimuli representing interaction of their IP at the top level
  • Top-level verification stays external to the AMS IP verification environment and continues to be managed by the SoC verification team, but can be reused by the AMS IP team without manual overhead
  • AMS IP is verified in-context and any inconsistencies are detected earlier in the verification process
  • Improved productivity and overall verification time

For more details, please see Infineon’s CDNLive presentation.





Integrating AMS IP in SoC Verification Just Got Easier

Typically, analog designers verify their AMS IP in a schematic-driven, interactive environment, while SoC designers use a UVM SystemVerilog testbench run from a command line. In our last MS blog, we talked about automation that lets analog designers reuse the SoC SystemVerilog testbench in order to verify AMS IP in exactly the same context as in its SoC integration, reducing surprises and unnecessary iterations.

But what about the other direction: selecting the proper AMS IP views for SoC verification? Manually export netlists from Virtuoso and then manually assemble all of the files for use in a command-line-driven flow? Often, there are multiple views for the same instance (RNM, analog behavioral model, transistor netlist). Which one should you pick? Who is supposed to update the configuration files? We often work concurrently and update the AMS IP views frequently. Obviously, manually selecting the correct and most up-to-date AMS IP views for SoC verification is tedious and error prone. Thanks to Cadence innovation, there is a better way!

Cadence has developed a Command-Line IP Selector (CLIPS) product as part of the Virtuoso® environment, which:

  • Bridges the gap between MS SoC command-line setup and the Virtuoso-based analog mixed-signal configuration
  • Allows seamless importing of AMS IP from the Virtuoso environment into an existing digital verification setup
  • Provides a GUI-based and command-line use model, flexible enough to fit into an existing design flow methodology

CLIPS reads MS SoC command (irun) files, identifies the required AMS IP modules, uses Virtuoso ADE setup files to properly netlist those modules, and pulls the AMS IP out of the Virtuoso environment. All necessary files are extracted, prepared, and packaged as required for the MS SoC command-line verification run. A CLIPS setup can be saved and rerun as a batch process to ensure the latest IP from the hierarchy is being simulated.

For more details, please see the CLIPS Rapid Adoption Kit on the Cadence Online Support page.





Verifying Power Intent in Analog and Mixed-Signal Designs Using Formal Methods

Analog and mixed-signal (AMS) designs increasingly use active power management to minimize power consumption. A typical mixed-signal design uses several power domains and operates in a dozen or more power modes, including multiple functional, standby, and test modes. To save power, parts of the design that are not active in a mode are shut down, or may operate at a reduced supply voltage when high performance is not required. These and other low-power techniques are applied to both the analog and digital parts of the design. Digital designers capture power intent in standard formats like the Common Power Format (CPF), IEEE 1801 (aka the Unified Power Format, or UPF), or Liberty, and apply it top-down throughout the design, verification, and implementation flows. Analog parts are often designed bottom-up in schematic, without upfront-defined power intent. Verifying that low-power intent is implemented correctly in a mixed-signal design is very challenging. If not discovered early, errors like wrongly connected power nets, missing level shifters, or missing isolation cells can cause costly rework or even a silicon re-spin.

Mixed-signal designers rely on simulation for functional verification. Although still necessary for electrical and performance verification, running simulation across so many power modes is not an effective way to discover low-power errors. It would be nice to augment simulation with formal low-power verification, but a specification of power intent for analog/mixed-signal blocks is missing. So how do we obtain it? Can we “extract” it from an already built analog circuit? Fortunately, yes we can, and we will describe an automated way to do so!

Virtuoso Power Manager is a new tool, released in the Virtuoso IC6.1.8 platform, which is capable of managing power intent in an analog/MS design captured in the Virtuoso Schematic Editor. In the setup phase, the user identifies power and ground nets and registers special devices like level shifters and isolation cells. The user has the option of importing power intent in IEEE 1801 format, applicable to the top level or to any of the blocks in the design. Virtuoso Power Manager uses this information to traverse the schematic and extract complete power intent for the entire design. In the final stage, Virtuoso Power Manager exports the power intent in IEEE 1801 format as an input to the formal verification tool (Cadence Conformal-LP) for static verification of power intent.

Cadence and Infineon have been collaborating on the requirements and validation of the Virtuoso Power Manager tool and the low-power verification solution on real designs. A summary of the collaboration results was presented at the DVCon conference in Munich in October of 2018. Please look for the paper in the conference proceedings for more details. Alternatively, you can view our Cadence webinar on Verifying Low-Power Intent in Mixed-Signal Design Using Formal Methods for more information.






Sentrifugo 3.2 File Upload Restriction Bypass

Sentrifugo version 3.2 suffers from a file upload restriction bypass vulnerability.






Sentrifugo CMS 3.2 Cross Site Scripting

Sentrifugo CMS version 3.2 suffers from a persistent cross site scripting vulnerability.






Packet Storm Exploit 2013-0813-1 - Oracle Java IntegerInterleavedRaster.verify() Signed Integer Overflow

The IntegerInterleavedRaster.verify() method in Oracle Java versions prior to 7u25 is vulnerable to a signed integer overflow that allows bypassing of "dataOffsets[0]" boundary checks. This exploit code demonstrates remote code execution by popping calc.exe. It was obtained through the Packet Storm Bug Bounty program.