
Aligning Components using Offset Mode in Allegro X APD

Starting with SPB 23.1, you can align components in Allegro X PCB Editor and Allegro X Advanced Package Designer using offset mode. Earlier, only spacing mode was available.

Follow these steps to align components using offset mode:

  1. Set Application Mode to Placement Edit.
  2. Drag-select the components that need to be aligned, right-click, and choose Align Components.
  3. In the Options tab, you will notice the Spacing section with an Equal Offset option. You can offset the components equally or individually, using the +/- buttons to increment or decrement the offset.





Introducing new 3DX Canvas in Allegro X Advanced Package Designer

Have you heard that, starting with SPB 23.1, Allegro Package Designer Plus (APD+) is renamed Allegro X Advanced Package Designer (Allegro X APD)?

Allegro X APD offers multiple new features and enhancements on topics like Via Structures, Wirebond, Etchback, Text Wizards, 3D Canvas, and more. 

This post presents the new 3DX Canvas introduced in SPB 23.1. This can be invoked from Allegro X APD (from the menu item View > 3DX Canvas). 

Some of the key benefits of the new canvas: 

  • This canvas addresses the scale and complexity in large modern package designs. It provides highly efficient visual representation and implementation of packages. 
  • The new architecture enables high-performance 3D incremental updates by utilizing GPU for fast rendering. 

  • Real-time 3D incremental updates are supported, which means that the 3D view is in sync with all changes to the database. 

  • The new canvas provides 3D visualization support for packaging objects such as wire bonds, balls, die bump/pillar geometries, die stacks, etch back, and plating bars.

  • This release also introduces an interactive measurement tool for the 3D view of packages. Once you open the 3DX Canvas, press the Alt key and select the objects you want to measure.
  • The 3DX Canvas provides new 3D DRC bond wire clearance checks based on true 3D geometry. True 3D DRC has been introduced in Constraint Manager: if you open Constraint Manager, you will see a new worksheet. The following DRC checks are supported:
    • Wire to Wire
    • Wire to Finger
    • Wire to Shape
    • Wire to Cline
    • Wire to Component





How to reuse device files for existing components

Have you ever encountered ERROR(SPMHNI-67) while importing logic? If yes, you might already know that you had to export the design's libraries and make sure that the paths (devpath, padpath, and psmpath) include the location of the exported files.

Starting with SPB 23.1, if you go to File > Import > Logic/Netlist and click the Other tab, you will see the option Reuse device files for existing components.

After selecting this option, ERROR(SPMHNI-67) no longer appears in the log file because the tool automatically extracts device files and seamlessly uses them for newly imported data. In other words, SPB 23.1 lets you reuse the device/component definitions already in the design without first having to dump libraries manually. An excellent improvement, don't you think?





How to allow DRCs to surrounding objects using the Etch Back option

Starting from SPB 23.1, a new option, Allow DRCs to surrounding metal, has been added to the Etch-Back form to allow DRCs to the surrounding objects.

The Allow DRCs to surrounding metal option lets you see and adjust the affected objects, instead of the earlier behavior, which sacrificed the width of the mask for the trace.

  • When this option is turned off, the tool maintains the EB mask to other-object clearance.
  • When this option is enabled, the tool keeps the EB mask to metal trace edge clearance and flags a DRC if the EB mask to other-object spacing is out of rule, as illustrated in the sketch below.
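As a rough illustration of these two behaviors, here is a conceptual sketch; the function name and the numbers are hypothetical and do not reflect APD's actual implementation:

```python
# Conceptual sketch of the "Allow DRCs to surrounding metal" behaviors.
# Hypothetical function and numbers -- not Allegro X APD's implementation.

def resolve_eb_mask(desired_mask_to_trace, mask_to_object, spacing_rule,
                    allow_drcs_to_metal):
    """Return (mask_to_trace_clearance, drc_flagged)."""
    if allow_drcs_to_metal:
        # Option ON: keep the designed mask-to-trace clearance and flag a
        # DRC when the mask ends up too close to another object.
        return desired_mask_to_trace, mask_to_object < spacing_rule
    # Option OFF: never violate the mask-to-object rule; instead sacrifice
    # mask width by shrinking the mask-to-trace clearance.
    shortfall = max(0.0, spacing_rule - mask_to_object)
    return max(0.0, desired_mask_to_trace - shortfall), False

print(resolve_eb_mask(0.10, 0.05, 0.08, allow_drcs_to_metal=True))   # (0.10, True)
print(resolve_eb_mask(0.10, 0.05, 0.08, allow_drcs_to_metal=False))  # ~(0.07, False)
```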





Finding routing problems (Route Vision) and quickly fixing them

The Vision Manager is a good tool for routing checks, but there is no quick or effective tool to fix or optimize the problems it reports.

For example: parallel gap less than preferred, minimum segment/arc length, uncoupled diff-pair segments, and so on.

The only way I know to fix the non-optimized segments is to use spread between voids, which in practice is inefficient.

Fixing "parallel gap less than preferred" by slicing every trace one by one is also inefficient.

If I set the preferred parallel gap to 50 um, is there any tool to quickly fix these problems (gaps less than 50 um)?

For the other problems, I can use the tool to quickly fix the minimum segment/arc length and uncoupled diff-pair segments by selecting by polygon or by window.





How to avoid adding degassing holes to a particular shape

In a package design, designers often need to perform degassing. This is typically done at the end of the design process before sending the design to the manufacturer.

Degassing is a process where you perforate power planes, voltage planes, and filled shapes in your design. Degassing holes let the gas escape from beneath the metal during manufacturing of the substrate. The perforations or holes for degassing are generally small, having a specified size and shape, and are spaced regularly across the surface of the plane. If the degassing process is not done, it may result in the formation of gas bubbles under the metal, which may cause the surface of the metal to become uneven. After you degas the design, it is recommended to perform electrical verification.

Allegro X APD has degassing features that allow users to automate the process and place holes in the entire shape.

In today's topic, we will talk about how to avoid adding degassing holes to a particular shape.

Sometimes, a designer may need to avoid adding degassing holes to a particular shape on a layer: all other shapes on the layer can have degassing holes, but not this one. Using the Layer Based Degassing Parameters option, the designer can set the degassing parameters for all shapes on the layer but would like to defer adding degassing holes for this particular shape.

You may wonder if there is an easy way to achieve this. We will now see how this can be done with the tool.

Once the degassing parameters are set, performing Display > Element on any of the shapes on that layer will show the degassing parameters set.

You can apply the Degas_Not_Allowed property to a shape to specify that degassing should not be performed on this shape, even if the degassing requirements are met. Select the shape and add the property as shown below.

Switch to Shape Edit application mode (Setup > Application mode > Shape Edit) and window-select all shapes on the layer. Then, right-click and select Deferred Degassing > All Off.

Now, all shapes on the layer will have degassing holes except for the shape which has the Degas_Not_Allowed property attached to it.
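As a conceptual recap (an illustration only, not APD's implementation or API), the overall behavior can be pictured as a regular grid of holes that skips any shape carrying the property:

```python
# Conceptual illustration only -- not APD's implementation or API.
# Generates a regular grid of degassing hole centers for each shape,
# skipping any shape carrying the DEGAS_NOT_ALLOWED property.

from dataclasses import dataclass, field

@dataclass
class Shape:
    name: str
    xmin: float; ymin: float; xmax: float; ymax: float
    properties: set = field(default_factory=set)

def degas_holes(shapes, pitch, hole_size):
    holes = []
    for s in shapes:
        if "DEGAS_NOT_ALLOWED" in s.properties:
            continue  # degassing deferred: leave this shape solid
        # Step across the (simplified, rectangular) shape on a regular
        # pitch, keeping each hole fully inside the shape outline.
        x = s.xmin + pitch
        while x + hole_size <= s.xmax:
            y = s.ymin + pitch
            while y + hole_size <= s.ymax:
                holes.append((s.name, x, y))
                y += pitch
            x += pitch
    return holes

plane = Shape("VDD_plane", 0, 0, 10, 5)
keep_solid = Shape("RF_shape", 12, 0, 15, 5, {"DEGAS_NOT_ALLOWED"})
print(len(degas_holes([plane, keep_solid], pitch=1.0, hole_size=0.3)))  # holes on VDD_plane only
```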





Allegro X APD - Tip of the week: Wondering how to set two adjacent layers as conductor layers? Then this post should help you.

By default, a dielectric must separate each pair of conductor layers in the cross-section of a design. In rare cases, this does not represent the real, manufactured substrate.

If your design requires conductor layers that are not separated by a dielectric (such as for half-etch designs), you must enable the icp_allow_adjacent_conductors variable in Allegro X APD. This entry, and its location in the User Preferences Editor, are shown in the following image.

Objects on adjacent conductor layers do not electrically connect automatically; a via must be used to establish the inter-layer connections.

When enabling this option, it is recommended to exercise caution because excluding dielectric layers from your cross-section can lead to inaccurate calculations, including the calculations for signal integrity and via heights. It is important that your cross-section accurately reflect the finished product to ensure the most accurate results possible. Any dielectric layers present in the manufactured part need to be in the cross-section for accurate extraction, 3D viewing, and so on.

Let us know your comments on the various designs that would require adjacent conductor layers.





Slide with 'hug only' gives the wrong result?

Hi,

Can you tell me which setting is causing this?

In General Edit mode, I try to slide a via to another position, but the slide result is wrong.

In Constraint Manager, I set Pad-Pad Connect to All Allowed and turned off the via-to-pad same-net spacing check; only the via-to-via same-net spacing check is on, with the spacing set to 0.

By default, the via ends up close to the pad edge; I think the correct location is shown in pic2.

 





Creating Power and Ground rings in Allegro X Package Designer Plus

Power and Ground rings are exposed rings of metal surrounding a die that supply power/ground to the die and create a low-impedance path for the current flow. These rings ensure stable power distribution and reduce noise. Allegro X Package Designer Plus has a utility called Power/Ground Ring Generator which lets you define and place one or more shapes in the form of a ring around a die.

To run the Power/Ground Ring Generator Wizard, go to Route > Power/Ground Ring Generator or type "pring wizard" in the APD command window.

   

This Wizard lets you define and place one or more shapes in the form of a ring around a die. The Power/Ground Ring Wizard creates up to 12 rings (shapes) at a time. If you require more rings, you can run the Power/Ground Ring Wizard as many times as needed. This command displays a wizard in which you can specify:

  • The number of rings to be generated
  • The creation of the first ring as a die flag (a die flag is a shape at the die boundary, similar to a power ring)
    • If you create a die flag and the first ring is on the same net as the flag, you can enter a negative distance to overlap the ring and the die flag.
  • Multiple options for placement of the rings with respect to:
    • Origination point
    • Distance from the edge of the die
    • Distance from the nearest die pin on each die side
  • The reference designator of the die with which the rings will be used
  • The distance between rings
  • The width of each ring
  • The corner types on each ring (arc, chamfer, and right-angle)
  • An assigned net name for each ring
  • A label for each ring

The rings are basic in nature. For other shape geometries or split rings, choose Shape > Polygon or Shape > Compose/Decompose Shape from the menu in the design window.

Depending on the options selected, the Power/Ground Ring Wizard UI changes, representing how the rings will be created. Verify the Wizard settings to ensure that the rings are created as intended.
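To make the placement parameters above concrete, here is a rough geometric sketch; it is not the wizard's code, and all names and values are illustrative:

```python
# Illustrative geometry only -- not the Power/Ground Ring Wizard's code.
# Computes concentric rectangular ring outlines around a die outline.

def ring_outlines(die_llx, die_lly, die_urx, die_ury,
                  distance_from_die, ring_widths, spacing):
    """Yield (outer, inner) rectangles for each ring, working outward.

    distance_from_die: gap between the die edge and the first ring
    ring_widths:       width of each ring, innermost first
    spacing:           gap between consecutive rings
    """
    offset = distance_from_die
    for width in ring_widths:
        inner = (die_llx - offset, die_lly - offset,
                 die_urx + offset, die_ury + offset)
        outer = (inner[0] - width, inner[1] - width,
                 inner[2] + width, inner[3] + width)
        yield outer, inner
        offset += width + spacing

# Two rings (e.g., VSS then VDD): 0.2 mm from the die, 0.3 mm wide, 0.15 mm apart
for outer, inner in ring_outlines(0, 0, 10, 10, 0.2, [0.3, 0.3], 0.15):
    print(outer, inner)
```

Entering a negative distance_from_die here would mimic the first-ring overlap with a die flag mentioned earlier.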

  1. When the Power/Ground Ring Wizard appears, set the number of rings to 2, accept the other defaults, and click Next. You can set Create first ring as die flag to create a basic die flag.

  2. Define Ring 1 and the net associated with it.

     a) Browse and choose Vss in the Net Names dialog box.

     b) Click OK.

     c) Specify the label as VSS.

     d) Click Next.

     The first ring should appear in your design. It is associated with the proper net; in this case, VSS.

  3. For the second ring, choose the net as Vdd and specify the label as VDD.
  4. Click Next.
  5. Click Finish in the Result Verification screen to complete the process.

The completed rings appear as shown below.

Now, when you click on a power or ground die pin and add wirebonds, you will see that the wirebonds are placed directly on the power and ground rings.





Maximizing Display Performance with Display Stream Compression (DSC)

Display Stream Compression (DSC) is a visually lossless image compression standard developed by the Video Electronics Standards Association (VESA) to reduce the bandwidth required to transmit high-resolution video and images. DSC compresses video streams in real time, allowing for higher resolutions, refresh rates, and color depths while minimizing the data load on transmission interfaces such as DisplayPort, HDMI, and embedded display interfaces.

Why Is DSC Needed?

In the ever-evolving landscape of display technology, the pursuit of higher resolutions and better visual quality is relentless. As display capabilities advance, so do the challenges of managing the immense amounts of data required to drive these high-performance screens. This is where DSC steps in. DSC is designed to address the challenges of transmitting ultra-high-definition content without sacrificing quality or performance. As displays grow in resolution and capability, the amount of data they need to transmit increases exponentially. DSC addresses these issues by compressing video streams in real-time, significantly reducing the bandwidth needed while preserving image quality.
 

DSC Use in End-to-end System

DSC Key Features

  • Encoding tools:
    • Modified Median-Adaptive Prediction (MMAP)
    • Block Prediction (BP)
    • Midpoint Prediction (MPP)
    • Indexed color history (ICH)
    • Entropy coding using delta size unit-variable length coding (DSU-VLC)
  • The DSC bitstream and decoding process are designed to facilitate the decoding of 3 pixels/clock in practical hardware decoder implementations. Hardware encoder implementations are possible at 1 pixel/clock.
  • DSC uses an intra-frame, line-based coding algorithm, which results in very low latency for encoding and decoding.

DSC encoding algorithm
 

  • Compression can be done to a fractional bpp. The compressed bits per pixel ranges from 6 to 63.9375.
  • For validation/compliance certification of DSC compression and decompression engines, cyclic redundancy checks (CRCs) are used to verify the correctness of the bitstream and the reconstructed image.
  • DSC supports more color bit depths, including 8, 10, 12, 14, and 16 bpc.
  • DSC supports RGB and YCbCr input formats, with 4:4:4, 4:2:2, and 4:2:0 sampling.
  • Maximum decompressor-supported bits/pixel values are listed in the Maximum Allowed Bit Rate column in the table below.

  • A DP DSC source device shall program the bit rate within the range given by the Minimum Allowed Bit Rate column in the table.

[Table: maximum and minimum allowed DSC bit rates]
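To put these bit rates in perspective, here is a quick back-of-the-envelope comparison of link bandwidth with and without DSC; the resolution, refresh rate, and 12 bpp target below are illustrative, and blanking/FEC overhead is ignored:

```python
# Rough bandwidth estimate: uncompressed vs. DSC-compressed stream.
# Illustrative numbers; blanking intervals and FEC overhead are ignored.

def stream_gbps(width, height, refresh_hz, bits_per_pixel):
    return width * height * refresh_hz * bits_per_pixel / 1e9

w, h, hz = 3840, 2160, 144                 # 4K at 144 Hz
uncompressed = stream_gbps(w, h, hz, 30)   # 10 bpc RGB = 30 bits/pixel
with_dsc = stream_gbps(w, h, hz, 12)       # DSC target of 12 bpp (fractional bpp allowed)

print(f"uncompressed: {uncompressed:.1f} Gbps")   # ~35.8 Gbps
print(f"with DSC:     {with_dsc:.1f} Gbps")       # ~14.3 Gbps, ~2.5x reduction
```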

          


Summary

Display Stream Compression (DSC) is a technology used in DisplayPort to enable higher resolutions and refresh rates while maintaining high image quality. It works by compressing the video data transmitted from the source to the display, effectively reducing the bandwidth required. DSC uses a visually lossless algorithm, meaning that the compression is designed to be imperceptible to the human eye, preserving the fidelity of the image. This technology allows for smoother, more detailed visuals at higher resolutions, such as 4K or 8K, without requiring a significant increase in data bandwidth.

More Information

  • Cadence has a very mature Verification IP (VIP) solution that supports many different configurations of DisplayPort 2.1 and DisplayPort 1.4 designs, so you can choose the best version for your specific needs.
  • The DisplayPort VIP provides a full-stack solution for Sink and Source devices with a comprehensive coverage model, protocol checkers, and an extensive test suite.
  • More details are available on the DisplayPort Verification IP product page and the Simulation VIP pages.
  • If you have any queries, feel free to contact us at talk_to_vip_expert@cadence.com





Use Verisium SimAI to Accelerate Verification Closure with Big Compute Savings

Verisium SimAI App harnesses the power of machine learning technology with the Cadence Xcelium Logic Simulator - the ultimate breakthrough in accelerating verification closure. It builds models from regressions run in the Xcelium simulator, enabling the generation of new regressions with specific targets. The Verisium SimAI app also features cousin bug hunting, a unique capability that uses information from difficult-to-hit failures to expose cousin bugs. With these advanced machine learning techniques, Verisium SimAI offers the potential for a significant boost in productivity, promising an exciting future for our users.

Figure 1: Regression compression and coverage maximization with Verisium SimAI 

What can I do with Verisium SimAI?

You can exercise different use cases with Verisium SimAI as per your requirements. For some users, the goal might be regression compression and improving coverage regain. Coverage maximization and hitting new bins could be another goal. Other users may be interested in exposing hard-to-hit failures and bug hunting for difficult-to-find issues. Verisium SimAI allows users to take on any of these challenges to achieve the desired results.

Let's go into some more details of these use cases and scenarios where using SimAI can have a big positive impact.

  1. Using SimAI for Regression Compression and Coverage Regain

Unlock up to 10X compute savings with SimAI!

Verisium SimAI can be used to compress regressions and regain coverage. This flow involves setting up your regression environment for SimAI, running your random regressions with coverage and randomization data followed by training, and finally, synthesizing and running the SimAI-generated compressed regressions. The synthesized regression may prune tests that do not help meet the goal and add more runs for the most relevant tests, as well as add run-specific constraints. This flow can also be used to target specific areas like areas involving a high code churn or high complexity.
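As a purely conceptual illustration of what regression compression means (this is not SimAI's algorithm, which builds machine learning models from regression data), you can think of it as a coverage-preserving test-selection problem, here solved with a greedy set cover:

```python
# Conceptual illustration only -- NOT SimAI's algorithm. Treats regression
# compression as greedy set cover: keep the fewest runs that preserve
# the coverage already achieved by the full regression.

def compress_regression(runs):
    """runs: dict mapping run name -> set of coverage bins it hits."""
    target = set().union(*runs.values())   # coverage to preserve
    kept, covered = [], set()
    while covered < target:
        # Pick the run contributing the most not-yet-covered bins.
        best = max(runs, key=lambda r: len(runs[r] - covered))
        kept.append(best)
        covered |= runs.pop(best)
    return kept

runs = {
    "t1": {"a", "b", "c"},
    "t2": {"b", "c"},          # redundant given t1
    "t3": {"c", "d"},
    "t4": {"e"},
}
print(compress_regression(runs))   # ['t1', 't3', 't4'] -- t2 pruned
```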

You can check out the details of this flow with illustrative examples in the following Rapid Adoption Kits (RAK) available on the Cadence Learning and Support Portal (Cadence customer credentials needed):

 

  2. Using SimAI for Coverage Maximization and Targeting Coverage Holes

Reduce your Functional Coverage Holes by up to 40% using SimAI!

Verisium SimAI can be used for iterative coverage maximization. This is most effective when regressions are largely saturated, and SimAI will explicitly try to hit uncovered bins, which may be hard-to-hit (but not impossible) coverage holes. This is achieved using iterative learning technology where with each iteration, SimAI does some exploration and determines how well it performed. This technique can also be used for bug hunting by using holes as targets of interest.

See more details on the Cadence Learning and Support Portal:

 

  3. Using SimAI for Bug Hunting

Discover and fix bugs faster using SimAI!

Verisium SimAI has a new bug hunting flow that can be used to expose hard-to-hit failure conditions. This is achieved using an iterative framework and by targeting failures or rare bins. Targeting failures is best exercised when the overall failure rate is low (typically below 5%). Iterative learning can be used to improve the ability to target specific areas. Use the SimAI bug hunting use case to target rare events, low-hit coverage bins, and low-hit failure signatures.

See more details on the Cadence Learning and Support Portal:

Unlock compute savings, reduce your functional coverage holes, and discover and fix bugs faster with the power of machine learning technology now enabled by Verisium SimAI!

Please keep visiting https://support.cadence.com/raks to download new RAKs as they become available.

Please note that you will need the Cadence customer credentials to log on to the Cadence Online Support https://support.cadence.com/, your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies.

Happy Learning!





Training Insights – Palladium Emulation Course for Beginner and Advanced Users

The Cadence Palladium Emulation Platform is a hardware system that implements a design, accelerating its execution and verification. It offers the highest performance and fastest bring-up times for pre-silicon validation of billion-gate designs, using a custom processor built by Cadence.

This Palladium Introduction course is based on the Palladium 23.03 ISR4 version and covers the following modules:

  • Introduction
  • Palladium flow
  • Running a design on the Palladium system

This course starts with an “Introduction” module that explains Palladium and other verification platforms to show its place in the big picture. It also compares Palladium with Protium and simulation and discusses its usage and limitations.

The “Palladium Flow” module first presents the flow's two stages, Compile and Run, at a high level and then covers them in detail: first the ICE compile flow and IXCOM compile flow steps, and then the Run stage, which is common to both ICE and IXCOM modes.

The third module, “Running Design on the Palladium System,” covers all the items required for running your design on the Palladium system, including:

  • Software stack requirements
  • Basic concepts required to understand the flow
  • Compute machine requirements

In addition, this course contains labs for both the ICE and IXCOM flows with detailed steps to exercise the features provided by the Palladium system. The labs walk through a practical example of multiple counters, exercising their signals with the force, monitor, and deposit features, along with frequency calculation using a real-time clock. The course is available on the Cadence support page.

There is also a Digital Badge available. You will find the badge exam opportunity when you enroll in the online training or after you have taken the training as live training.

For questions and inquiries, or issues with registration, reach out to us at Cadence Training. Want to stay up to date on webinars and courses? Subscribe to Cadence Training emails. To view our complete training offerings, visit the Cadence Training website.






Jasper Formal Fundamentals 2403 Course for Starting Formal Verification

The course "Jasper Formal Fundamentals v24.03" introduces formal analysis to those who want to use formal analysis for design or verification. 

To optimally benefit from this course, you must already have sufficient knowledge of SystemVerilog Assertions (SVA) to be capable of writing properties for formal verification. Hence, this training provides a module on formal analysis to help cover this essential background.

In this course, you will learn how to code efficient SVA Properties for formal analysis, understand formal complexity and how to overcome it, and learn the basics of formal coverage.

After completing this course, you will be able to:

  • Define reusable, functionally correct SVA properties that are efficient for formal tools. These use abstract auxiliary code to simplify descriptions, make code maintenance easier, reduce debug time, and reduce the tool's proof runtime.
  • Set up, run, and analyze results from formal analysis.
  • Identify designs upon which formal is likely to be successful while understanding formal complexity issues and how to identify and overcome them.
  • Use a systematic property development process to approach a completely new verification problem.
  • Understand the basics of formal coverage.

 The most recently updated release includes new modules on:

  • "Basic complexity handling" which discusses the complexity in formal and how to identify and handle them.
  • "Complexity reduction methods” which discusses the complexity reduction methods and which is suitable for which type of complexity problem.
  • “Coverage in formal” which discusses the basics of coverage in formal verification and how coverage can be used in formal.   

Take this course to learn the basics of formal verification. 

What's Next? 

You can check out the complete training: Jasper Formal Fundamentals. There is a free online version of the training available 24/7 for all customers with a Cadence Learning and Support Portal account. If you are interested in an instructor-led version of the training, please contact Cadence Training. And don't forget to obtain your digital badge after completing the training!

You can also check Jasper University page for more materials on formal analysis and Jasper apps. 

Related Trainings 

Jasper Formal Expert Training Course | Cadence

Verilog Language and Application Training Course | Cadence

SystemVerilog for Design and Verification Training Course | Cadence

SystemVerilog Assertions Training Course | Cadence

Related Training Bytes 

Jasper Formal Property Verification (FPV) App: Basic Usage Demo (Video)

Jasper Formal Methodology playlist

Related Training Blogs

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

Training Insights: Introducing the C++ Course for All Your C++ Learning Needs!

Training Insights: Reaching Your Verification Closure Using Verisium Manager

Training Insights - Free Online Courses on Cadence Learning and Support Portal





Unveiling the Capabilities of Verisium Manager for Optimized Operations

In SoC development, the verification cycle is a crucial phase that ensures products meet their specifications and function correctly. However, the complexity of modern SoC projects, with their constant data flow, multiple validation teams working in parallel, and tight schedules, presents significant challenges. This article explores these challenges and introduces Verisium Manager as a solution that embodies the 'One Tool Fits All' concept. This means that Verisium Manager is designed to handle all aspects of the verification process for SoC development, from planning to coverage analysis to regression testing, thereby addressing the complex needs of SoC verification.

The Hurdles in Traditional Validation Cycles

A typical validation process involves planning, coverage analysis, and regression testing. The complexity of coordinating these activities is compounded by using separate tools for each activity, leading to multiple control environments, APIs, and databases, not to mention the array of tool owners. Such fragmentation results in constant data transfer and translation between systems, from the planning tool to the coverage analysis tool and then to the regression testing tool. This continuous movement of data causes delays, system instability, poor user experiences, and, ultimately, a dip in the quality of the validation process.

The use of multiple platforms leads to inefficiency and reduced productivity. What's needed is a unified system that can streamline the workflow, simplify the verification process, and enhance its effectiveness.

Envisioning the Ideal Solution: Verisium Manager

 The cornerstone of an efficient validation cycle is integration and simplicity. The ideal solution is a singular platform that consolidates planning, coverage analysis, and regression management into one smooth, unified process. Verisium Manager emerges as this much-needed solution, encompassing all the functionalities necessary to streamline the validation process. Its comprehensive nature instills confidence in its ability to handle all aspects of the verification cycle. It can be fully customized to address and enforce any validation methodology and can facilitate smooth integration into any customer environment.

Features that stand out in Verisium Manager include: 

  • Unified Workflow: It acts as a single cockpit from which all activities are orchestrated, ensuring the validation teams' work is uninterrupted and seamlessly integrated.
  • Customization and Integration: Verisium Manager supports customizing test-plan structures and mapping results per project, ensuring a perfect fit for various project requirements. Its ability to smoothly integrate into the project's environment and compute platforms is unparalleled.
  • Support for Continuous Updates and Migration: The tool accommodates constant updates to project data and supports the migration of legacy data, ensuring that no historical data is lost in the transition to a new system.

Addressing Project-Specific Needs

Verisium Manager recognizes the diversity of projects and offers project-specific solutions, including:

  • Enforcing Project Test-Plan Structures and Attributes: It supports and enforces each project's unique test-plan structure and mapping guidelines.
  • Unified Data Views and Measurements: Verisium Manager promotes a unified view of data across all teams and enforces unified measurements, ensuring consistency and clarity in the validation process.
  • Enabling Project-Specific Actions and Integrations: The tool is designed to support project-specific actions directly from its graphical user interface and allows for smooth integration with in-house databases, dashboards, and the project execution stack.

Verisium Manager is the epitome of efficiency in software/hardware validation. Its differentiating features, such as support for customization, unified data view, and comprehensive coverage and regression requirements, make it an indispensable tool for any validation team looking to elevate their workflow.





Deferrable Memory Write Usage and Verification Challenges

Deferrable Memory Write is valuable where real-time data processing or responsiveness is crucial, such as in high-performance computing, data centers, or applications requiring low-latency data transfers. It enables efficient use of PCIe bandwidth and resources by intelligently managing memory write operations based on system dynamics and workload priorities. By effectively leveraging Deferrable Memory Write (DMWr), devices can achieve optimized performance and responsiveness, aligning with the evolving demands of modern computing applications.

What Is Deferrable Memory Write?

The Deferrable Memory Write (DMWr) ECN introduced this new memory transaction type, which was later officially incorporated into PCIe 5.0 and CXL 2.0. DMWr flows like the existing read/write memory transactions; the major difference is that the Requester attempts to write to a given location in Memory Space using the non-posted DMWr TLP type, and the completion of the write can be postponed or deferred until higher-priority tasks complete, improving overall system efficiency and performance.

The Deferrable Memory Write (DMWr) requires the Completer to return an acknowledgment to the Requester and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request.

DMWr provides a mechanism for Endpoints and Hosts to choose to carry out or defer incoming DMWr Requests. This mechanism can be used by Endpoints and Hosts to simplify the design of flow control, reduce latency, and improve throughput. The Deferrable Memory Write TLP format is shown in Figure A.

 

(Figure A) Deferrable Memory Write TLP format.

Example Scenario

Here's how DMWr works, with a simplified example: imagine a system with an endpoint device (Device A) and a host CPU (Device B). Device B wants to write data to Device A's memory, but due to varying reasons, such as system bus congestion or prioritization of other transactions, Device A can defer the completion of the memory write request. Just follow these steps (a conceptual sketch follows the list):

  1. Initiation of Memory Write: Device B initiates a memory write transaction to Device A. This involves sending the memory write request along with the data payload over the PCIe physical layer link.
  2. Acknowledgment and Deferral: Upon receiving the memory write request, Device A acknowledges the transaction but may decide to defer its completion. Device A sends an acknowledgment (ACK) back to Device B, indicating it has received the data and intends to complete the write operation but not immediately.
  3. Deferred Completion: Device A defers the completion of the memory write operation to a later, more opportune time. This deferral allows Device A to prioritize other transactions or optimize the use of system resources, such as memory bandwidth or processor availability.
  4. Completion and Response: At a later point, Device A completes the deferred memory write operation and sends a completion indication back to Device B. This completion typically includes any status updates or additional information related to the transaction.
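The following sketch models this four-step handshake in plain Python; it is illustrative only, not a PCIe implementation, and all names and values are invented:

```python
# Conceptual model of the DMWr handshake described above -- illustrative
# only, not a PCIe implementation. Names and structures are invented.

from collections import deque

class Completer:                      # plays the role of Device A
    def __init__(self):
        self.deferred = deque()
        self.memory = {}

    def receive_dmwr(self, addr, data, busy):
        if busy:
            # Steps 2-3: acknowledge receipt but defer the actual write.
            self.deferred.append((addr, data))
            return "ACK: received, completion deferred"
        self.memory[addr] = data
        return "completion"

    def drain(self):
        # Step 4: at a more opportune time, finish the deferred writes
        # and return completions to the Requester.
        completions = []
        while self.deferred:
            addr, data = self.deferred.popleft()
            self.memory[addr] = data
            completions.append(f"completion for {hex(addr)}")
        return completions

dev_a = Completer()
print(dev_a.receive_dmwr(0x1000, b"payload", busy=True))   # deferred
print(dev_a.drain())                                       # completed later
```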

Usage or Importance of DMWr

Deferrable Memory Write provides improvements in the following aspects:

  • Reduced Latency: By deferring less critical memory write operations, more critical transactions can be processed with lower latency, improving overall system responsiveness.
  • Improved Efficiency: Optimizes the utilization of system resources such as memory bandwidth and CPU cycles, enhancing the efficiency of data transfers within the PCIe architecture.
  • Enhanced Performance: Allows devices to manage and prioritize transactions dynamically, potentially increasing overall system throughput and reducing contention.

Challenges in the Implementation of DMWr Transactions

The implementation of deferrable memory writes (DMWr) introduces several advancements and challenges in terms of usage and verification:

  1. Timing and Synchronization: DMWr allows transactions to be deferred, which complicates meeting timing requirements and completing transactions within acceptable timing windows without protocol violations. Ensuring proper synchronization between devices becomes critical to prevent data loss or corruption.
  2. Protocol Compliance: Verification must ensure compliance with ECN PCIe 6.0 and CXL specifications regarding when and how DMWr transactions can be initiated and completed.
  3. Performance Optimization: While DMWr can improve overall system performance by reducing latency, verifying its impact on system performance and ensuring it meets expected benchmarks is crucial.
  4. Error Handling: Handling errors related to deferred transactions adds complexity. Verifying error detection and recovery mechanisms under various scenarios (e.g., timeout during deferral) is essential.

Verification Challenges of DMWr Transactions

Verifying DMWr transactions involves checks covering function, timing, protocol compliance, performance improvement, error scenarios, and security, as well as data integrity at the PCIe and CXL levels.

  1. Functional Verification: Verifying the correct implementation of DMWr at both ends of the PCIe link (transmitter and receiver) to ensure proper functionality and adherence to specifications.
  2. Timing Verification: Validating timing constraints associated with deferring writes and ensuring transactions are completed within specified windows without violating protocol rules.
  3. Protocol Compliance Verification: Checking that DMWr transactions adhere to PCIe and CXL protocol rules, including ordering rules and any restrictions on deferral based on the transaction type.
  4. Performance Verification: Assessing the impact of DMWr on overall system performance, including latency reduction and bandwidth utilization, through simulation and testing.
  5. Error Scenario Verification: Creating and testing scenarios to verify error handling mechanisms related to DMWr, such as timeouts, retries, and recovery procedures.
  6. Security Considerations: Assessing potential security vulnerabilities related to DMWr, such as data integrity risks during deferred transactions or exposure to timing-based attacks.

The major verification challenge is timing and synchronization verification in the context of implementing deferrable memory writes (DMWr), which is crucial due to the inherent complexities introduced by deferred transactions. Here are the key issues and approaches to address them:

Timing and Synchronization Issues

  1. Transaction Completion Timing:
    • Issue: Ensuring deferred transactions are completed within the specified time window without violating protocol timing constraints.
    • Approach: Design an internal timer and checker to model worst-case scenarios where transactions are deferred, and verify that they complete within allowable latency limits. This involves simulating various traffic loads and conditions to assess timing under different scenarios (a simplified checker sketch follows this list).
  2. Ordering and Dependencies:
    • Issue: Verifying that transactions deferred using DMWr maintain the correct ordering and dependencies relative to non-deferred transactions.
    • Approach: Implement test scenarios that include mixed traffic of DMWr and non-DMWr transactions. Verify through simulation or emulation that dependencies and ordering requirements are correctly maintained across the PCIe link.
  3. Interrupt Handling and Response Times:
    • Issue: Verify the handling of interrupts and ensure timely responses from devices involved in DMWr transactions.
    • Approach: Implement test cases that simulate interrupt generation during DMWr transactions. Measure and verify the response times to interrupts to ensure they meet system latency requirements.
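In the spirit of the internal timer-and-checker approach above, here is a simplified, illustrative latency checker; the 1000 ns window and all names are hypothetical:

```python
# Simplified deferral-latency checker -- illustrative only, not a real
# testbench component. The allowable window below is a made-up number.

MAX_DEFER_NS = 1000   # hypothetical allowable deferral window

class DeferralChecker:
    def __init__(self, max_defer_ns=MAX_DEFER_NS):
        self.max_defer_ns = max_defer_ns
        self.pending = {}          # tag -> time the DMWr was deferred

    def on_deferral(self, tag, now_ns):
        self.pending[tag] = now_ns

    def on_completion(self, tag, now_ns):
        latency = now_ns - self.pending.pop(tag)
        assert latency <= self.max_defer_ns, (
            f"DMWr tag {tag} completed after {latency} ns "
            f"(limit {self.max_defer_ns} ns)")

chk = DeferralChecker()
chk.on_deferral(tag=7, now_ns=100)
chk.on_completion(tag=7, now_ns=900)    # OK: 800 ns <= 1000 ns
```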

In conclusion, while deferrable memory writes in PCIe and CXL offer significant performance benefits, their implementation and verification present several challenges related to timing, protocol compliance, performance optimization, and error handling. Addressing these challenges requires rigorous testing with realistic traffic, advanced verification methodologies, and a thorough understanding of the PCIe specifications, as well as of the motivation behind introducing Deferrable Memory Write, which is used further in CXL. Verification should confirm that the performance benefits of DMWr (reduced latency, improved throughput) are achieved without compromising timing integrity or violating protocol specifications.

In summary, PCIe and CXL are complex protocols with many verification challenges. You must understand the many new specification changes and build a robust verification plan covering both the new features and the backward-compatible behavior they affect. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specification and provides an effective and efficient way to verify components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early-adopter customers to speed up every verification stage.






Training Webinar: Protium X2: Using Save/Restart for Debugging

Cadence Protium prototyping platforms rapidly bring up an SoC or system prototype and provide a pre-silicon platform for early software development, SoC verification, system validation, and hardware regressions. In this Training Webinar, we will explore debugging using Save/Restart on Protium X2. This feature saves execution time and lets you focus on actual debugging: the system state can be saved before the bug appears, and execution can restart directly from there without spending time on the initial execution. We'll cover key concepts and applications, explore Save/Restart performance metrics, and provide examples to help you understand the concepts.

Agenda:

  • The key concepts of debugging using Save/Restart
  • Capabilities, limitations, and performance metrics
  • Some examples to enable and use Save/Restart on the Protium X2 system

Date and Time

Thursday, November 7, 2024
07:00 PST San Jose / 10:00 EST New York / 15:00 GMT London / 16:00 CET Munich / 17:00 IST Jerusalem / 20:30 IST Bangalore / 23:00 CST Beijing

REGISTER

To register for this webinar, sign in with your Cadence Support account (email ID and password) to log in to the Learning and Support System. Then select Enroll to register for the session. Once registered, you'll receive a confirmation email containing all login details.

A quick reminder: If you haven't received a registration confirmation within 1 hour of registering, please check your spam folder and ensure your pop-up blockers are off and cookies are enabled. For issues with registration or other inquiries, reach out to eur_training_webinars@cadence.com.

Want to See More Webinars?

You can find recordings of all past webinars here.

Like This Topic?

Take this opportunity and register for the free online course related to this webinar topic: Protium Introduction Training. The course includes slides with audio and downloadable lab exercises designed to emphasize the topics covered in the lecture. There is also a Digital Badge available for the training.

Want to share this and other great Cadence learning opportunities with someone else? Tell them to subscribe.

Hungry for Training?

Choose the Cadence Training Menu that's right for you. To view our complete training offerings, visit the Cadence Training website.

Related Courses

Protium Introduction Training Course | Cadence
Palladium Introduction Training Course | Cadence

Related Blogs

Training Insights – A New Free Online Course on the Protium System for Beginner and Advanced Users
Training Insights – Palladium Emulation Course for Beginner and Advanced Users

Related Training Bytes

Protium Flow Steps for Running Design on Protium System
ICE and IXCOM mode comparison
ICE compile flow
IXCOM compile flow
PATH settings for using Protium System

Please see the course learning maps for a visual representation of courses and course relationships. Regional course catalogs may be viewed here.





BETA CAE Systems Is Now Cadence: Join Our 2024 China Open Meeting

This November, the engineering and simulation community is set to converge in China for an event that promises to be nothing short of revolutionary. The 2024 BETA CAE Systems China Open Meeting, taking place in the vibrant cities of Beijing and Shanghai on November 5 and 7, respectively, is a must-attend for anyone looking to stay at the forefront of technological innovation in simulation solutions.

Prepare to be inspired by Ben Gu, the visionary Corporate VP of Research and Development at Cadence. He will lead both meetings in Beijing and Shanghai with his keynote on "A New Millennium in Multiphysics System Analysis." This thought-provoking keynote is expected to provide attendees with a glimpse into the future of engineering simulation and analysis.

What sets the BETA CAE Systems Open Meetings apart is not just the high caliber of speakers but also the hands-on training sessions designed to enhance your technical expertise with the BETA CAE software suite. Whether you are an inexperienced individual seeking to acquire fundamental knowledge or an accomplished professional endeavoring to hone your expertise, the training sessions following the open meetings are meticulously tailored to meet your needs.

Join Us at the BETA CAE Systems Open Meeting in Beijing

The BETA CAE Systems Open Meeting in Beijing will feature a keynote speech by Peng Qiao, Senior Engineer at Great Wall Motors Co., Ltd, on Multidisciplinary Optimization Techniques for Automotive Control Arms. (View detailed agenda for Beijing.)

When: November 5, 2024
Where: Grand Metropark Hotel Beijing

If this sounds interesting, register today for the BETA CAE Systems Beijing Open Meeting by clicking the button below.

Don't Miss Out on the BETA CAE Systems Open Meeting in Shanghai

After the BETA CAE Systems Open Meeting in Beijing, the next meeting in China will be in Shanghai. During this event, Liu Deping, CAE Engineer from Zhejiang Geely Automobile Research Institute Co., Ltd, will deliver a keynote speech on the Application of ANSA in the Simulation Development Cycle. (View detailed agenda for Shanghai.)

When: November 7, 2024
Where: InterContinental Shanghai Jing'an

Following the open meeting on November 7 will be an exclusive training day on November 8. This session will provide attendees with practical experience using the BETA CAE software to improve their technical skills and provide hands-on knowledge of the software. If you find this intriguing, register now for the BETA CAE Systems Shanghai Open Meeting by clicking the button below.

Why Attend?

  • Gain firsthand insights into the latest developments in simulation technology
  • Learn from real-world applications and success stories from various industries
  • Connect and exchange ideas with experts in a collaborative environment

Mark your calendars for this unparalleled opportunity to explore the forefront of simulation technology. Whether you're aiming to broaden your knowledge, enhance your technical skills, or connect with industry leaders, the BETA CAE Systems Open Meetings are your gateway to the future of engineering. Join us and be part of shaping the next wave of innovation in the simulation world.





Wild River Collaborates with Cadence on CMP-70 Channel Modeling

Wild River Technology (WRT), the leading supplier of signal integrity measurement and optimization test fixtures for high-speed channels at data rates of up to 224G, has announced the availability of a new advanced channel modeling solution that helps achieve extreme signal integrity design to 70GHz. Read the press release.

The CMP-70 program continues the industry-first simulation-to-measurement collaboration with Cadence that was initially established with the CMP-50. Significant resources were dedicated to the development of the CMP-70 by Cadence and WRT over almost three years. The CMP-70 will be on display at DesignCon 2025, January 28-30, in Cadence booth 827 to benchmark the Cadence Clarity 3D Solver.

“I am not a fan of hype-based programs that simply get attention,” remarked Alfred P. Neves, WRT’s co-founder and chief technical officer. “Both Cadence and Wild River brought substantial skills to the table in this project as we continued our industry-first simulation-to-measurement collaboration. The result is a proven, robust and accurate platform that brings extreme signal integrity to 70GHz designs. This application package has also been instrumental in demonstrating the robust 3D EM simulation capability of the Cadence Clarity solver.”

“We’re delighted to continue the joint development and validation program with WRT that started with the CMP-50,” said Gary Lytle, product management director at Cadence. “The skilled and experienced signal integrity technologists that both companies bring to the program results in a superior signal integrity solution for our mutual customers.”

CMP-70 Solution Features

The solution is available both in a standard configuration and as a custom solution for customer-specific stackups and fabrication. The primary target application is to support 3D EM solver analysis modeling versus time- and frequency-domain measurement methodologies. The solution features include:

  • The CMP-70 platform, assembled and 100% TDR NIST-traceable tested, with custom stands
  • A material identification overview web-based meeting, including anisotropic 3D material identification
  • A cross-section PCB report and structures for using as-fabricated geometries
  • Measured S-parameters, pre-tested for quality (passivity/causality) and resampled for time-domain simulations
  • A host of novel crosstalk structures suited for 112G HD level project analysis
  • PCB layout design files (NDA required)
  • An EDA starter library including loss models with industry-first accurate surface roughness models
  • Comprehensive training available for 3D EM analysis – correspondence, material ID in X-Y and Z axes for a host of EDA tools

Industry-First Hausdorff Technique

The WRT application package also includes an industry-first modified Hausdorff distance (MHD) technique, included as MATLAB code. This algorithmic approach provides an accurate way to compare two sets of measurements in multi-dimensional space to determine how well they match. The technique is used to compare the results simulated by the Clarity solver with those measured on the CMP-70 platform. The methodology and initial results are shown in the figure below, where the figure of merit (FOM) is calculated from 10, 35, and finally to 50GHz. The MHD algorithm requires a MATLAB license, but WRT also accommodates customer data as another option, where WRT provides the comparison between measured and simulated data.
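For readers who want to experiment without MATLAB, here is a small Python sketch of the commonly used modified Hausdorff distance (the Dubuisson-Jain form); it is not WRT's code, and the sample data below is invented:

```python
# Modified Hausdorff distance (MHD) sketch -- not WRT's MATLAB code.
# MHD(A, B) = max( mean_a min_b |a-b| , mean_b min_a |a-b| )

import numpy as np

def mhd(A, B):
    """A, B: (n, d) and (m, d) arrays of points in d-dimensional space."""
    # Pairwise Euclidean distances, shape (n, m)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(),   # directed distance A -> B
               d.min(axis=0).mean())   # directed distance B -> A

# Invented example: compare simulated vs. measured loss traces
freq = np.linspace(10e9, 50e9, 201)
sim  = np.column_stack([freq / 1e9, -0.05 * freq / 1e9])
meas = np.column_stack([freq / 1e9, -0.05 * freq / 1e9 + 0.02])
print(f"MHD figure of merit: {mhd(sim, meas):.4f}")
```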
Additional Resources

If you are attending DesignCon 2025, be sure to stop by Cadence booth 827 to see WRT's CMP-70 advanced channel modeling solution in action with the Clarity 3D Solver.

Check out our on-demand webinar, "Validating Clarity 3D Solver Accuracy Through Measurement Correlation."

Learn more about the CMP-70 solution and the Clarity 3D Solver.

For more information about Cadence's full suite of integrated multiphysics simulation solutions, download our Multiphysics System Analysis Solutions Portfolio.





Training Insights: DE-HDL Libraries in Allegro X System Capture

Allegro X System Capture offers a complete ecosystem for library development. This post introduces the latest DE-HDL Library Development using System Capture course, in which you learn how to create different library objects. As a librarian, you often work with numerous libraries. Your tasks include creating or modifying symbols for libraries. To use Allegro X System Capture to create a library, you can follow the steps in the following flowchart:

Let’s go through each step in detail.

Setting the CDS_SITE Variable

Before you start library development for a new project, set the CDS_SITE system environment variable. This step is required to access libraries and other configuration files.

Creating a Project in Allegro X System Capture

The next step is to create a project in Allegro X System Capture.

Adding a Library to the Project

Symbol development consists of creating symbol graphics, electrical data, and properties used by different tools in the PCB design flow. To add a library to a project, first create a library in the Libraries pane of the Project Explorer.

Creating Library Symbols

The library development process supports the creation of various types of symbols.

Creating a Symbol with Multiple Views

You can generate multiple views of the same symbol using the Duplicate command. For example, a discrete symbol, such as a resistor, can have multiple views, as shown in the following image:

Creating a Split Symbol

For advanced designs, you often need to create library symbols and break them into multiple sections to support the design process. When a symbol shows all the logical pins in the physical package, it is called a single-section or flat symbol. Many large ICs have several pins, and the symbols need to fit on a single schematic page. One workaround is to use vector pin names on a symbol to reduce its size, although manufacturers prefer schematics that show each pin. You can divide these high-pin-count devices into smaller pieces, where each piece is a separate version of the part. Such parts are referred to as split parts or multi-section symbols. For multi-section symbols, you can create two types of split parts: symmetrical and asymmetrical.

Symmetrical Split Symbols

A symmetrical split symbol has only one symbol graphic, which holds two or more identical logic symbols, each with its own unique physical pin numbers. You can create a symmetrical split symbol using the Duplicate Section icon in the canvas window. Each symbol section contains the same set of pins but different pin numbers, as shown in the following image:

Asymmetrical Split Symbols

An asymmetrical split symbol is a symbol whose physical package contains one or more unique schematic symbols. You can create an asymmetrical split symbol by clicking the New Section icon in the canvas window. Asymmetrical symbols have a unique set of logical pins, as shown in the following image:

Creating Symbols Using the Spreadsheet Interface

To simplify the development of large symbols, Allegro X System Capture has a Spreadsheet Interface. You can copy from a spreadsheet into the interface. This saves time and helps minimize errors introduced by manual entry.

In conclusion, the DE-HDL library development using Allegro X System Capture course involves several critical steps and supports various symbol creation techniques. This course helps librarians create and modify symbols effortlessly and deepens their understanding of library development within Allegro X System Capture.
To learn more about this topic, enroll in the DE-HDL Library Development using Allegro X System Capture course on the Cadence Support portal. Click the training byte link now or visit Cadence Support and search for training bytes under Video Library.

If you find the post useful and want to delve deeper into training details, enroll in the following online training course for lab instructions and a downloadable design: DE-HDL Library Development using Allegro X System Capture (Online). You can become Cadence Certified once you complete the course.

Cadence Training Services now offers free Digital Badges for all popular online training courses. These badges indicate proficiency in a certain technology or skill and give you a way to validate your expertise to managers and potential employers. You can add the digital badge to your email signature or any social media channels, such as Facebook or LinkedIn, to highlight your expertise. To find out more, see the blog post Take a Cadence Masterclass and Get a Badge.

You might also be interested in the training Learning Map that guides you through recommended course flows as well as tool experience and knowledge-level training modules. To find information on how to get an account on the Cadence Learning and Support portal, see here.

SUBSCRIBE to the Cadence training newsletter to be updated about upcoming training, webinars, and much more. If you have any questions about courses, schedules, online training, blended/virtual live training, or public or onsite live training, reach out to us at Cadence Training.





Training Webinar: Fast Track RTL Debug with the Verisium Debug Python App Store

As a verification engineer, you’re surely looking for ways to automate the debugging process. Have you developed your own scripts to ease specific debugging steps that tools don’t offer? Working with scripts locally and manually is challenging—so is reusing and organizing them. What if there was a way to create your own app with the required functionality and register it with the tool?

The answer to that question is “Yes!” The Verisium Debug Python App Store lets you instantly add features and capabilities to your Verisium Debug application using Python apps that interact with Verisium Debug via the Python API. Join me, Principal Education Application Engineer Bhairava Prasad, for this Training Webinar and discover the Verisium Debug Python App Store. The app store allows you to search for existing apps, learn about them, install or uninstall them, and even customize existing apps.

Date and Time

Wednesday, November 20, 2024
07:00 PST San Jose / 10:00 EST New York / 15:00 GMT London / 16:00 CET Munich / 17:00 IST Jerusalem / 20:30 IST Bangalore / 23:00 CST Beijing

REGISTER

To register for this webinar, sign in with your Cadence Support account (email ID and password) to log in to the Learning and Support System*. Then select Enroll to register for the session. Once registered, you’ll receive a confirmation email containing all login details.

A quick reminder: If you haven’t received a registration confirmation within one hour of registering, please check your spam folder and ensure your pop-up blockers are off and cookies are enabled. For issues with registration or other inquiries, reach out to eur_training_webinars@cadence.com.

Like This Topic?

Take this opportunity and register for the free online course related to this webinar topic: Verisium Debug Training. To view our complete training offerings, visit the Cadence Training website. Want to share this and other great Cadence learning opportunities with someone else? Tell them to subscribe.

Hungry for Training?

Choose the Cadence Training Menu that’s right for you.

Related Courses

Xcelium Simulator Training Course | Cadence

Related Blogs

Unveiling the Capabilities of Verisium Manager for Optimized Operations - Verification - Cadence Blogs - Cadence Community
Verisium SimAI: SoC Verification with Unprecedented Coverage Maximization - Corporate News - Cadence Blogs - Cadence Community
Verisium SimAI: Maximizing Coverage, Minimizing Bugs, Unlocking Peak Throughput - Verification - Cadence Blogs - Cadence Community

Related Training Bytes

Introducing Verisium Debug (Video) (cadence.com)
Introduction to UVM Debug of Verisium Debug (Video) (cadence.com)
Verisium Debug Customized Apps with Python API

Please see the course learning maps for a visual representation of courses and course relationships. Regional course catalogs may be viewed here.

*If you don’t have a Cadence Support account, go to Cadence User Registration and complete the requested information. Or visit Registration Help.





Redefining Hearing Aids with Cadence DSPs

Hearing is one of the most essential senses for engaging with the world. It enables us to converse, appreciate music, and remain alert to our surroundings. Hearing loss is a prevalent issue affecting millions of individuals globally and disconnecting them from a world where sound is vital to others and the environment. The World Health Organization (WHO) reports that over 5% of the global population requires hearing rehabilitation, a striking statistic highlighting this issue's pervasive nature.

Technology has transformed audiology, evolving from simple ear trumpets to sophisticated modern hearing aids. This advancement began with the invention of the transistor, paving the way for devices that are fully wearable inside or behind the ear. Although hearing aids have been available for many years, historically, access to these critical devices has been insufficient, resulting in numerous individuals lacking the necessary support. However, recent advances in hearing aid technology promise improved acoustic experiences, employing modern techniques like binaural processing and neural networks. These innovations demand sophisticated architecture to balance high memory needs with low power consumption in a user-friendly design. Cadence is at the forefront of this technological evolution, offering tools and IP solutions that enhance the accessibility, efficiency, and impact of hearing aids, paving the way for a more inclusive future. This blog explores how Cadence's advanced DSPs are transforming hearing aid design and making them more accessible, efficient, and impactful.

Hearing Aids: A Testament to Human Ingenuity

The transition from analog to digital technology in the late 20th century further transformed hearing aids, offering superior sound quality, customization, and the ability to connect to various electronic devices, thus enhancing the user experience markedly. Today's hearing aids are highly effective, versatile, and nearly invisible, a significant advancement from early attempts to address hearing loss. They also feature advanced noise cancellation and connectivity options, allowing users to integrate seamlessly into the digital world. This progression not only highlights the industry's commitment to improving user experience and accessibility but also offers a glimpse into a future where hearing loss is no longer a barrier.

Challenges

Despite advancements and sophistication, there are several challenges related to hearing aid design and adoption. Users demand smaller, more discreet devices that don't sacrifice performance. While the shift towards sleeker designs is aesthetically pleasing, it introduces substantial complexities in product design. Designers face the challenges of integrating essential components, such as batteries and peripherals, into increasingly compact spaces. Power consumption remains a critical concern, as these devices must remain operational throughout the day. Leveraging neural networks to enhance the signal-to-noise ratio (SNR) for better quality demands additional memory capacity. Consequently, there is a pressing need for flexible, low-power architectures that incorporate all necessary memory and peripherals without compromising the device's compact size. Adopting AI for adjusting hearing aid volume to fit an individual's specific auditory requirements is a significant challenge and demands more memory and effort. Besides this, reliability and cost are significant challenges for manufacturers.
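Much of this processing burden comes down to per-sample gain decisions made in real time on a low-power DSP. As a generic illustration (plain C++, not Tensilica-specific code, with all parameter values invented for the example), the sketch below applies wide-dynamic-range compression, the classic hearing-aid gain rule: quiet sounds receive full gain, while louder sounds are progressively compressed.

#include <cmath>
#include <cstddef>

// Toy wide-dynamic-range compression (WDRC): below the knee the full gain
// is applied; above it, gain is rolled off at the compression ratio.
void wdrc(const float* in, float* out, std::size_t n) {
    const float kneeDb = -40.0f;  // compression threshold (made up)
    const float gainDb = 25.0f;   // linear gain below the knee (made up)
    const float ratio  = 3.0f;    // 3:1 compression above the knee (made up)
    float env = 1e-6f;            // running signal-envelope estimate

    for (std::size_t i = 0; i < n; ++i) {
        // One-pole envelope follower tracking the input level.
        env += 0.01f * (std::fabs(in[i]) - env);

        const float levelDb = 20.0f * std::log10(env + 1e-9f);
        float g = gainDb;
        if (levelDb > kneeDb)  // compress: 1 dB in -> 1/ratio dB out
            g -= (levelDb - kneeDb) * (1.0f - 1.0f / ratio);

        out[i] = in[i] * std::pow(10.0f, g / 20.0f);
    }
}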
Cadence's Role in Transforming Hearing Aids

In hearing aid development, the capacity to evaluate the energy efficiency of SoCs across different frequencies in real time is crucial. These applications demand cohesive, energy-efficient solutions that can uphold high performance. The Cadence Tensilica HiFi and Fusion F1 DSP families emphasize minimal power usage while providing robust performance, ideally suited for a wide range of audio and voice applications. The Cadence Tensilica HiFi DSP family, a high-performance audio technology with AI acceleration and advanced DSP capability, offers feature-rich audio, speech, and imaging for wearables, automotive, home entertainment, digital assistants, and ASR. The Tensilica HiFi DSP family accelerates innovation with its comprehensive instruction set and supports fixed- and floating-point data types. Simplifying software development, it offers C/C++ programming, an auto-vectorizing compiler, and a rich DSP software library through the Cadence Tensilica Xplorer development environment. With the flexibility to customize and enhance performance through additional instructions and better I/O bandwidth, the Tensilica HiFi and Fusion DSP families offer a robust, low-energy audio solution compatible across an expansive software ecosystem for various applications and devices.

Conclusion

Technological advancements are driving hearing aid evolution; the future of hearing aids lies in further miniaturization and functionality enhancement. Cadence's ongoing innovations aim to improve signal processing and noise reduction, even in challenging environments. The integration of neural networks promises clearer sound transmission and greater adaptability. Cadence is working on improving how these devices process signals and reduce noise and has initiated a collaborative venture with distinguished entities like GlobalFoundries (GF), Hoerzentrum Oldenburg gGmbH, and Leibniz University Hannover. This collaboration has borne fruit in the form of the industry's first binaural hearing aid system-on-chip (SoC) prototype, the Smart Hearing Aid Processor (SmartHeAP).

Learn More

Cadence, GlobalFoundries, Hoerzentrum Oldenburg and Leibniz University Hannover Collaborate to Advance Hearing Aid Technology

Cadence Extends Battery Life and Improves User Experience for Next-Generation Hearables, Wearables and Always-On Devices

Advancing the Future of Hearing Aids with Cadence

Bluetooth LE Audio, Hearing Aids, and Mindtree





McLaren and Cadence Are Engineering Success

Celebrated for their unparalleled engineering expertise and pioneering mindset, McLaren stands at the forefront of innovation. Theirs is a story of engineering excellence, a symphony of speed driven by the relentless pursuit of aerodynamic perfection. In 2022, Cadence was named an Official Technology Partner of the McLaren Formula 1 Team. The multi-year partnership between McLaren and Cadence has helped redefine the boundaries of what’s possible in Formula 1 aerodynamics.

Shaving off a fraction of a second per lap can make all the difference in a podium finish, and track conditions bring layers of complexity to the design process. That’s where Cadence steps in with Fidelity CFD Software. The Cadence Fidelity CFD software is a comprehensive suite of computational fluid dynamics (CFD) solutions. Access to this solution allows the McLaren F1 team to accelerate their CFD workflow, enabling them to assess designs faster and more precisely. It also allows them to investigate airflows and tackle design projects that require advanced compute power and precision. With Fidelity Flow’s solver capabilities and Python-driven automation, Cadence’s CFD software aids the advancement of the aerodynamic simulations that go into McLaren’s F1 cars. With a customized, high-quality, multi-block meshing strategy and optimized workflow, Fidelity CFD makes design exploration more automated, thereby helping establish a strong foundation for McLaren’s future success on the track.

Lando Norris, F1 driver for McLaren, said, “As a driver, I saw the impact of every decision made in the design room in every simulation run. The work on aerodynamics directly translates to the confidence I have on track, the grip in every turn, and the speed on every straight. This partnership, this technology, is what will give us the edge. It's not just about battling opponents; it's about mastering the airflow around the car in every driving condition on every track.”

If you’re interested in learning more about the importance of CFD in McLaren’s racing success, be sure to attend our upcoming webinar, “CFD and Experimental Aerodynamics in McLaren F1 Engineering.” Christian Schramm, McLaren’s director of advanced projects, and Cadence’s Benjamin Leroy will be the main speakers for the event. Register today to secure your spot!

For more insights on the Formula 1 car design process, take a look at the case study, “McLaren Formula 1 Car Aerodynamics Simulation with Cadence Fidelity CFD Software.” Learn more about how McLaren and Cadence are engineering success.

“Designed with Cadence” is a series of videos that showcases creative products and technologies that are accelerating industry innovation using Cadence tools and solutions. For more Designed with Cadence videos, check out the Cadence website and YouTube channel.





Simulating Multiple Cadence DSPs as Multiple x86 Processes

An increasing number of embedded designs are multi-core systems. At the pre-silicon stage, customers use a simulation platform for architectural exploration and software development. Architects want to quantify the impact of the number of cores, local memory size, system memory latency, and interconnect bandwidth. Software teams wish to have a practical development platform that is not excruciatingly slow.

This blog shares a recipe for simulating Cadence DSPs in a multi-core design as separate x86 processes. The purpose is to reduce simulation time for customers with simple multi-core models where cores interact only through shared memory. It uses a Vision Q8 multi-core design to share details of the XTSC (Xtensa SystemC) model, software application, commands, and debugging. Note that the details shared are for a simulation run on an Ubuntu Linux machine, Xtensa tools version RI-2023.11, and core configuration XRC_Vision_Q8_AODP.

Complex vs. Simple Model

A complex model (Figure 1) is one in which one core accesses another core's local memory, or there are inter-core interrupts. Simulation runs as a single x86 process.

Figure 1

A simple model (Figure 2) is one in which cores interact only through shared memory. Shared memory is a file on the Linux host.

Figure 2

Multiple x86 Processes – Simple Model

As depicted in Figure 3, each core is simulated using a separate x86 process. Cores use barriers and locks placed in shared memory for synchronization and data sharing. Locks are placed in uncached memory that supports exclusive subordinate access. The XTSC memory component, xtsc_memory, supports exclusive subordinate access. Cadence software tools provide a way to define memory regions as cached or uncached. For more details, please refer to Cadence's Linker Support Packages (LSP) Reference Manual for Xtensa SDK.

Figure 3

Demo Application

A demo application performs a 128x128 matrix multiplication. Work is divided so that each of the 32 cores computes four rows of the 128x128 result matrix. Cores use barriers to synchronize. Cadence tools provide APIs for synchronization and locking; please refer to Cadence's System Software Reference Manual for more details. Note that without a higher-level lock, prints from all cores will get mixed up. Therefore, in the demo application, only core#0 prints.

SystemC Simulation

The following sample commands run the 32-core simulation in such a way that each core is a separate x86 process, executing the matrix multiplication application in cycle-accurate mode with logging off. A loop along the following lines launches one simulation process per core (the exact loop body depends on your setup):

>>for (( N=0; N<32; N++ )); do xtsc-run -define=NumCores=32 -define=N=$N -define=LOGGING=0 -define=TURBO=0 -i=coreNN.inc & done

The following command generates an sc_main.cpp for core#0 without running a simulation:

>>xtsc-run -define=NumCores=32 -define=N=0 -define=LOGGING=0 -define=TURBO=0 --xxdebug=sync -i=coreNN.inc -sc_main=sc_main.cpp -no_sim

Modify the sc_main.cpp generated for core#0 to create a generic sc_main.cpp to build a single simulation executable for all cores. The Xtensa SDK includes Makefile targets to build custom simulations. By default, the simulation runs in cycle-accurate mode. Fast functional (Turbo) mode provides additional improvement over cycle-accurate mode. Note that the fast functional mode has an initialization phase, so gains are visible only when running an application with longer run times.
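To make the demo application's work split concrete, here is a minimal C++ sketch of the partitioning scheme described above: each core derives its four-row slice of the result matrix from its core ID and then synchronizes on a barrier. The names are illustrative stand-ins, not the Cadence System Software APIs; in the real application the matrices and the barrier live in shared memory, as described earlier.

#include <cstdint>

constexpr int kSize  = 128;  // 128x128 matrices, as in the demo
constexpr int kCores = 32;   // one simulated DSP core per x86 process

// In the real setup these live in shared memory (a file on the Linux host);
// plain globals are used here so the sketch compiles standalone.
int32_t A[kSize][kSize], B[kSize][kSize], C[kSize][kSize];

// Placeholder for a shared-memory barrier built on locks in uncached memory
// (hypothetical signature; see the System Software Reference Manual).
void barrier_wait() { /* spin on a lock placed in uncached shared memory */ }

// Each core computes kSize / kCores = 4 rows of the result matrix.
void compute_partition(int core_id) {
    const int rows_per_core = kSize / kCores;
    const int first = core_id * rows_per_core;

    for (int i = first; i < first + rows_per_core; ++i)
        for (int j = 0; j < kSize; ++j) {
            int32_t acc = 0;
            for (int k = 0; k < kSize; ++k)
                acc += A[i][k] * B[k][j];
            C[i][j] = acc;
        }

    // All 32 cores rendezvous here; only core#0 prints afterwards.
    barrier_wait();
}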
Simulation Wall Time

The table captures simulation wall time improvements. Note that these are illustrative wall time numbers; actual numbers and improvements will depend on your host machine's performance and your application.

Simulation Type                             | Wall Time     | Comments
Single process, cycle-accurate mode         | 17500 seconds |
Multiple x86 processes, cycle-accurate mode | 1385 seconds  | 12X faster than single process
Multiple x86 processes, turbo mode          | 415 seconds   | 3X faster than cycle-accurate mode

Debugging

Attaching a debugger to each of the individual x86 core simulation processes is possible. Synchronous stop/resume and core-specific breakpoints are also supported. Configure the Xplorer launch configuration and attach it to the running simulation processes as shown in Figure 5.

Figure 5

Figure 6 shows 32 debug contexts.

Figure 6

As shown, using the Xtensa SDK, you can create a multi-core simulation that functions as a practical software development platform. Please visit the Cadence support site for information on building and simulating multi-core Xtensa systems.





Celebrating Milestones: The Cadence Bangalore Toastmasters Club’s Journey

On November 5, 2024, the Cadence Bangalore Toastmasters Club celebrated a significant milestone by hosting its 50th meeting. Established in December 2020, the club was created to provide a supportive environment for individuals looking to improve their communication and leadership skills. Over the years, the club has evolved into a vibrant community filled with success stories of personal development and newfound confidence.

A testament to the club's dedication is its achievement of the "Select Distinguished Club" status during the 2023-2024 program year. By fulfilling 7 out of 10 distinguished goals, the club highlighted its commitment to excellence, a success driven by its vibrant members' relentless focus and perseverance. The strategic insight gained from regular Toastmasters committee meetings and the influential "Moments of Truth" sessions held in 2023 and 2024 is key to this success.

Our club members have consistently demonstrated strong performance in various speech contests, with notable achievements across multiple levels. In 2023, members excelled in Evaluation and Table Topics contests, reaching the district level while advancing to the Division Level in the International Speech Contest. Continuing their success into 2024, members again qualified for area-level contests, securing third-place positions in the Evaluation and Table Topics categories, highlighting the club's dedication and competitive spirit.

The 50th meeting was based on the theme of serendipity. It was not only a milestone celebration but also a vibrant festival of achievements and growth. The day buzzed with energy as activities like a spirited Treasure Hunt injected enthusiasm and camaraderie among attendees. Distinguished guests, including Kripa Venkitachalam and Madhavi Rao, enriched the occasion with inspiring speeches. Madhavi reignited the club's spirit, while Kripa's discourse on the Growth Mindset and the "Power of Yet" encouraged members to pursue continuous self-improvement.

The Cadence Bangalore Toastmasters Club is enthusiastic about its promising future and is committed to creating an environment that promotes personal and professional growth. Many members are close to completing their Toastmasters levels and pathways, and this term, a new group of approximately 30 individuals has joined, bringing the total membership to 52. This vibrant community is just beginning its journey and is eager to reach new milestones together through mutual support and a shared commitment to excellence.

The transformations experienced by many club members are truly compelling. They often share how the club has significantly improved their communication skills and boosted their confidence. One member recalls, "Before joining, I found public speaking intimidating. Now, I embrace every opportunity to share my ideas." Another member highlights how the club's supportive environment helped him overcome his fear of public speaking, propelling his career to new heights. This culture of constructive feedback and continuous improvement has inspired countless members to pursue their dreams with renewed determination and optimism.

The Cadence Bangalore Toastmasters Club's journey is a living testament to the power of community and the potential within each of us to grow and achieve greatness. As the club continues to evolve and inspire, it serves as a beacon for those aspiring to transform their skills and seize their moment in the spotlight.

Learn more about life at Cadence.





Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is an essential goal. In verification, randomization is a key technique that drives robust PCIe verification: it introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more relevant details and understanding of PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature.

The Importance of Data Integrity and Data Encryption in PCIe Devices

Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification.

Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds.

Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64GT/s in PCIe 6.0 and 128GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details.

Randomization in PCIe Verification

Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures and become serious hindrances later. For PCIe IDE verification, randomization helps us verify behavior more efficiently.

Randomization for Data Integrity Verification

Here are some randomized-verification approaches that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with normal verification methods.

1. Randomized Packet Injection: This technique randomizes data packets and injects them into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system's ability to detect and reject unauthorized or invalid packets, and to check that encryption/decryption occurs correctly across packets. On verifying, we check whether the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checks. The technique helps assess the resilience of the IDE feature in PCIe in terms of: (i) Data corruption: detecting whether the system can maintain data integrity. (ii) Encryption failures: testing the robustness of the encryption under random data injection. (iii) Packet ordering errors: ensuring reordering does not affect data delivery.

2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation: (i) detects and responds to unexpected errors; (ii) maintains secure communication under stress; (iii) follows the PCIe error recovery and reporting mechanisms (AER, Advanced Error Reporting); (iv) ensures encryption and decryption states stay synchronized across endpoints.

3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads.

Randomization for Data Encryption Verification

Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, ensuring that vulnerabilities, such as key reuse or predictable patterns, are identified and mitigated.

1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoding assumptions. This ensures that the encryption logic behaves correctly across all possible inputs.

2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, the diagram below illustrates a detailed example key programming flow using the IDE_KM protocol.

Figure 1: IDE_KM Example

As Figure 1 shows, the IDE_KM protocol involves the start of the IDE_KM session, device capability discovery, a key request from the host, key programming to the PCIe device, and key acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if a device supports the IDE protocol, the system continues with the key programming process. A query then discovers the device's encryption capabilities, establishing whether the device supports dynamic key updates or static keys. The host then sends a request to the key management entity to obtain a key suitable for the device. Once the key is obtained, the host programs it into the IDE controller on the PCIe endpoint. Both the host and the device now share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established. From this point, all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM plays a crucial role in distributing keys securely and maintaining encryption and integrity for PCIe transactions. This key programming flow ensures that a secure communication channel is established between the host and the PCIe device. Hence, the randomized key approach ensures that the encryption does not repeat patterns.

3. Randomized PHE: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. Validating PHE using a variety of traffic, and incorporating randomization in the APIs provided for validating the PHE feature, adds more robust encryption to the data. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this.

Figure 2: High-Level Flow for Partial Header Encryption

4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are configured considering the memory address range of IDE partner ports. The fields of the IDE address registers are split into multiple values, such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks considering 32- or 64-bit addresses, different register sizes, selective streams 0-255, address blocks 0-15, and so on. Randomized verification here can help cover all the corner cases. Please refer to Figure 3.

Figure 3: IDE Address Association Register

5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage.

Challenges of IDE Randomization and Their Solutions

Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs increases complexity: encryption masks data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks in the IDE callback to analyze encrypted traffic without decrypting it. Randomization can also trigger unexpected failures that are often difficult to reproduce. By using seed-based randomization, a specific seed generates a repeatable random sequence, which helps reproduce and analyze the behavior precisely.

Conclusion

Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps uncover subtle bugs and edge cases that non-randomized testing might miss. In Cadence PCIe VIP, we support full-fledged IDE verification with rigorous randomized verification that ensures data integrity, while robust and reliable encryption mechanisms ensure secure and efficient data communication. However, randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve with higher speeds and stronger security demands, Cadence PCIe VIP stays in line with industry demand, verifying high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0.

More Information

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express.

For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.
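As a generic illustration of the constrained, seed-based randomization described above (an illustrative C++ sketch, not the Cadence VIP API; all field names are invented), the fragment below draws stimulus fields only from valid ranges and shows how reusing a logged seed replays the exact same sequence:

#include <cstdint>
#include <cstdio>
#include <random>

// Invented stand-in for randomized IDE packet stimulus fields.
struct IdePacketStimulus {
    uint32_t payloadBytes;  // payload size, constrained to a valid range
    uint8_t  streamId;      // selective IDE stream, 0-255
    bool     corruptCrc;    // occasional deliberate CRC fault injection
};

// Constrained randomization: every field is drawn only from legal values,
// with a small probability of fault injection.
IdePacketStimulus randomize(std::mt19937& rng) {
    std::uniform_int_distribution<uint32_t> size(4, 4096);
    std::uniform_int_distribution<int>      stream(0, 255);
    std::bernoulli_distribution             fault(0.01);  // 1% CRC faults
    return { size(rng), static_cast<uint8_t>(stream(rng)), fault(rng) };
}

int main() {
    const unsigned seed = 0xC0FFEE;  // log the seed with every regression run
    std::mt19937 rng(seed);          // same seed => same stimulus sequence
    for (int i = 0; i < 5; ++i) {
        IdePacketStimulus p = randomize(rng);
        std::printf("pkt %d: %u bytes, stream %u, corruptCrc=%d\n",
                    i, p.payloadBytes, (unsigned)p.streamId, (int)p.corruptCrc);
    }
    return 0;
}

Rerunning with the same seed reproduces any failure the sequence uncovered, which is the reproducibility argument made in the challenges section.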





Troubles using the LTC4417

Hello~

As the following circuit shows, VCC+5V_USB is the 4th power source, connected to the output of the power-management diode. There are three 5V inputs at the input ports of the LTC4417.

Everything is normal when VCC+5V_USB provides power to the other circuits. However, if I cut VCC+5V_FIRST, VCC+5V_SECOND, and VCC+5V_THIRD, a 5V voltage appears on VCC+5V_FIRST, VCC+5V_SECOND, and VCC+5V_THIRD.

The LTC4417 PDF

 

Is this phenomenon normal?

Please kindly give me some advice! Thanks.





Replace Cache using a TCL command

Hello,

I'm using OrCAD 17.2, and in the company I'm working at there was a change in the database folder (from drive F to G, for example), and it affects the option to synchronize using the Part Manager. Changing each part manually in the Design Cache can be a pain.

Is there any way I can make a TCL script that will replace one part's cache with another? Even better if it can read from a table and take the replacement from another column.

I would really appreciate an example.

Thanks for the help.





Using an oscilloscope waveform CSV file as the PSpice simulation signal source

hi,

I saved the waveform file of the oscilloscope in CSV file format.

Now, I need to use this waveform file as the source for a low-pass filter.

I searched and read the PSpice help documents and did not find a method.

How can I realize this?

Are there any reference documents or examples?

Thanks!





Path mapping for C Firmware source files when debugging

Hi,

I am compiling firmware under Windows and transferring the binaries and sources to Linux to simulate/debug there. The problem is that the paths in the DWARF debug info of the .elf file are the absolute Windows paths set by the compiler, so they are useless under Linux. Is it possible to configure mappings of these paths to the Linux paths when simulating/debugging, as with, e.g., GDB (https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html#index-set-substitute_002dpath)?

thx,

Peter





MATLAB cannot open PSpice; orCEFSimpleUI.exe prompts that it has stopped working!

Cadence_SPB_17.4-2019 + Matlab R2019a

Please follow the steps in this document:

1. Open BJT_AMP.opj.
2. Set the MATLAB path.
3. Open BJT_AMP_SLPS.slx.
4. After opening, configure the PSpiceBlock; orCEFSimpleUI.exe stops working.
5. Add the module.
6. Same result.
7. Open pspsim.slx.
8. Same result.
9. Open C:\Cadence\Cadence_SPB_17.4-2019\tools\bin; both orCEFSimpleUI.exe and orCEFSimple.exe are present.
10. Same result.

I would like to ask how to solve this. Many thanks!





How to turn the vavlog IO width mismatch error into a warning?

Hi, all.

When I use vavlog to compile Verilog RTL, it treats an IO port width mismatch as a fatal error.

How can I turn the error into a warning?

VCS can use -error=noIOPCWM to ignore the error.

Does vavlog have a similar argument?





The code used to Replace Cache using a TCL command

Use the DBO function DboLib_ReplaceCache to do the job of "Replace Cache".

To ease the job, use the code below; it is a wrapper around the function mentioned above.

# Get the active design from the current OrCAD session
set lStatus [DboState]
set lSession $::DboSession_s_pDboSession
DboSession -this $lSession
set lDesignsIter [$lSession NewDesignsIter $lStatus]
set lDesign [$lDesignsIter NextDesign $lStatus]
set lNullObj NULL

# Use forward slashes (or doubled backslashes) in the paths: inside a
# double-quoted Tcl string, "\P" would drop the backslash and corrupt the path
set oldLibName [DboTclHelper_sMakeCString "E:/PROJECT_WORKLIB.OLB"]
set newLibName [DboTclHelper_sMakeCString "E:/MCU_PARTS_LIB.OLB"]

# DboLib_ReplaceCache wrapper: repoints the cache entry of partName
# from oldLibName to newLibName
proc ReplaceCacheByName {partName} {
    global oldLibName
    global newLibName
    global lDesign
    set lPartStr [DboTclHelper_sMakeCString $partName]
    $lDesign ReplaceCache $lPartStr $oldLibName $lPartStr $newLibName 0 1
}

Then use a Tcl command like the one below to do the real job:

ReplaceCacheByName "CL10B104KB8NNNC_C12"





To Escalate or Not? This Is Modi’s Zugzwang Moment

This is the 17th installment of The Rationalist, my column for the Times of India.

One of my favourite English words comes from chess. If it is your turn to move, but any move you make makes your position worse, you are in ‘Zugzwang’. Narendra Modi was in zugzwang after the Pulwama attacks a few days ago—as any Indian prime minister in his place would have been.

An Indian PM, after an attack for which Pakistan is held responsible, has only unsavoury choices in front of him. He is pulled in two opposite directions. One, strategy dictates that he must not escalate. Two, politics dictates that he must.

Let’s unpack that. First, consider the strategic imperatives. Ever since both India and Pakistan became nuclear powers, a conventional war has become next to impossible because of the threat of a nuclear war. If India escalates beyond a point, Pakistan might bring their nuclear weapons into play. Even a limited nuclear war could cause millions of casualties and devastate our economy. Thus, no matter what the provocation, India needs to calibrate its response so that Pakistan doesn’t take it all the way.

It’s impossible to predict what actions Pakistan might view as sufficient provocation, so India has tended to play it safe. Don’t capture territory, don’t attack military assets, don’t kill civilians. In other words, surgical strikes on alleged terrorist camps are the most we can do.

Given that Pakistan knows that it is irrational for India to react, and our leaders tend to be rational, they can ‘bleed us with a thousand cuts’, as their doctrine states, with impunity. Both in 2001, when our parliament was attacked and the BJP’s Atal Bihari Vajpayee was PM, and in 2008, when Mumbai was attacked and the Congress’s Manmohan Singh was PM, our leaders considered all the options on the table—but were forced to do nothing.

But is doing nothing an option in an election year?

Leave strategy aside and turn to politics. India has been attacked. Forty soldiers have been killed, and the nation is traumatised and baying for blood. It is now politically impossible to not retaliate—especially for a PM who has criticized his predecessor for being weak, and portrayed himself as a 56-inch-chested man of action.

I have no doubt that Modi is a rational man, and knows the possible consequences of escalation. But he also knows the possible consequences of not escalating—he could dilute his brand and lose the elections. Thus, he is forced to act. And after he acts, his Pakistan counterpart will face the same domestic pressure to retaliate, and will have to attack back. And so on till my home in Versova is swallowed up by a nuclear crater, right?

Well, not exactly. There is a way to resolve this paradox. India and Pakistan can both escalate, not via military actions, but via optics.

Modi and Imran Khan, who you’d expect to feel like the loneliest men on earth right now, can find sweet company in each other. Their incentives are aligned. Neither man wants this to turn into a full-fledged war. Both men want to appear macho in front of their domestic constituencies. Both men are masters at building narratives, and have a pliant media that will help them.

Thus, India can carry out a surgical strike and claim it destroyed a camp, killed terrorists, and forced Pakistan to return a braveheart prisoner of war. Pakistan can say India merely destroyed two trees plus a rock, and claim the high moral ground by returning the prisoner after giving him good masala tea. A benign military equilibrium is maintained, and both men come out looking like strong leaders: a win-win game for the PMs that avoids a lose-lose game for their nations. They can give themselves a high-five in private when they meet next, and Imran can whisper to Modi, “You’re a good spinner, bro.”

There is one problem here, though: what if the optics don’t work?

If Modi feels that his public is too sceptical and he needs to do more, he might feel forced to resort to actual military escalation. The fog of politics might obscure the possible consequences. If the resultant Indian military action causes serious damage, Pakistan will have to respond in kind. In the chain of events that then begins, with body bags piling up, neither man may be able to back down. They could end up as prisoners of circumstance—and so could we.

***

Also check out:

Why Modi Must Learn to Play the Game of Chicken With Pakistan—Amit Varma
The Two Pakistans—Episode 79 of The Seen and the Unseen
India in the Nuclear Age—Episode 80 of The Seen and the Unseen

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





Population Is Not a Problem, but Our Greatest Strength

This is the 21st installment of The Rationalist, my column for the Times of India.

When all political parties agree on something, you know you might have a problem. Giriraj Singh, a minister in Narendra Modi’s new cabinet, tweeted this week that our population control law should become a “movement.” This is something that would find bipartisan support – we are taught from school onwards that India’s population is a big problem, and we need to control it.

This is wrong. Contrary to popular belief, our population is not a problem. It is our greatest strength.

The notion that we should worry about a growing population is an intuitive one. The world has limited resources. People keep increasing. Something’s gotta give.

Robert Malthus made just this point in his 1798 book, An Essay on the Principle of Population. He was worried that our population would grow exponentially while resources would grow arithmetically. As more people entered the workforce, wages would fall and goods would become scarce. Calamity was inevitable.

Malthus’s rationale was so influential that this mode of thinking was soon called ‘Malthusian.’ (It is a pejorative today.) A 20th-century follower of his, Harrison Brown, came up with one of my favourite images on this subject, arguing that a growing population would lead to the earth being “covered completely and to a considerable depth with a writhing mass of human beings, much as a dead cow is covered with a pulsating mass of maggots.”

Another Malthusian, Paul Ehrlich, published a book called The Population Bomb in 1968, which began with the stirring lines, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” Ehrlich was, as you’d guess, a big supporter of India’s coercive family planning programs. “I don’t see,” he wrote, “how India could possibly feed two hundred million more people by 1980.”

None of these fears have come true. A 2007 study by Nicholas Eberstadt called ‘Too Many People?’ found no correlation between population density and poverty. The greater the density of people, the more you’d expect them to fight for resources – and yet, Monaco, which has 40 times the population density of Bangladesh, is doing well for itself. So is Bahrain, which has three times the population density of India.

Not only does population not cause poverty, it makes us more prosperous. The economist Julian Simon pointed out in a 1981 book that through history, whenever there has been a spurt in population, it has coincided with a spurt in productivity. Such as, for example, between Malthus’s time and now. There were around a billion people on earth in 1798, and there are around 7.7 billion today. As you read these words, consider that you are better off than the richest person on the planet then.

Why is this? The answer lies in the title of Simon’s book: The Ultimate Resource. When we speak of resources, we forget that human beings are the finest resource of all. There is no limit to our ingenuity. And we interact with each other in positive-sum ways – every voluntary interaction leaves both people better off, and the amount of value in the world goes up. This is why we want to be part of economic networks that are as large, and as dense, as possible. This is why most people migrate to cities rather than away from them – and why cities are so much richer than towns or villages.

If Malthusians were right, essential commodities like wheat, maize and rice would become relatively scarcer over time, and thus more expensive – but they have actually become much cheaper in real terms. This is thanks to the productivity and creativity of humans, who, in Eberstadt’s words, are “in practice always renewable and in theory entirely inexhaustible.”

The error made by Malthus, Brown and Ehrlich is the same error that our politicians make today, and not just in the context of population: zero-sum thinking. If our population grows and resources stay the same, of course there will be scarcity. But this is never the case. All we need to do to learn this lesson is look at our cities!

This mistaken thinking has had savage humanitarian consequences in India. Think of the unborn millions over the decades because of our brutal family planning policies. How many Tendulkars, Rahmans and Satyajit Rays have we lost? Think of the immoral coercion still carried out on poor people across the country. And finally, think of the condescension of our politicians, asserting that people are India’s problem – but always other people, never themselves.

This arrogance is India’s greatest problem, not our people.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





Trump and Modi are playing a Lose-Lose game

This is the 22nd installment of The Rationalist, my column for the Times of India.

Trade wars are on the rise, and it’s enough to get any nationalist all het up and excited. Earlier this week, Narendra Modi’s government announced that it would start imposing tariffs on 28 US products starting today. This is a response to similar treatment towards us from the US.

There is one thing I would invite you to consider: Trump and Modi are not engaged in a war with each other. Instead, they are waging war on their own people.

Let’s unpack that a bit. Part of the reason Trump came to power is that he provided simple and wrong answers for people’s problems. He responded to the growing jobs crisis in middle America with two explanations: one, foreigners are coming and taking your jobs; two, your jobs are being shipped overseas.

Both explanations are wrong but intuitive, and they worked for Trump. (He is stupid enough that he probably did not create these narratives for votes but actually believes them.) The first of those leads to the demonising of immigrants. The second leads to a demonising of trade. Trump has acted on his rhetoric after becoming president, and a modern US version of our old ‘Indira is India’ slogan might well be, “Trump is Tariff. Tariff is Trump.”

Contrary to the fulminations of the economically illiterate, all tariffs are bad, without exception. Let me illustrate this with an example. Say there is a fictional product called Brump. A local Brump costs Rs 100. Foreign manufacturers appear and offer better Brumps at a cheaper price, say Rs 90. Consumers shift to foreign Brumps.

Manufacturers of local Brumps get angry, and form an interest group. They lobby the government – or bribe it with campaign contributions – to impose a tariff on import of Brumps. The government puts a 20-rupee tariff. The foreign Brumps now cost Rs 110, and people start buying local Brumps again. This is a good thing, right? Local businesses have been helped, and local jobs have been saved.

But this is only the seen effect. The unseen effect of this tariff is that millions of Brump buyers would have saved Rs 10-per-Brump if there were no tariffs. This money would have gone out into the economy, been part of new demand, generated more jobs. Everyone would have been better off, and the overall standard of living would have been higher.

That brings to me to an essential truth about tariffs. Every tariff is a tax on your own people. And every intervention in markets amounts to a distribution of wealth from the people at large to specific interest groups. (In other words, from the poor to the rich.) The costs of this are dispersed and invisible – what is Rs 10 to any of us? – and the benefits are large and worth fighting for: Local manufacturers of Brumps can make crores extra. Much modern politics amounts to manufacturers of Brumps buying politicians to redistribute money from us to them.

There are second-order effects of protectionism as well. When the US imposes tariffs on other countries, those countries may respond by imposing tariffs back. Raw materials for many goods made locally are imported, and as these become expensive, so do those goods. That quintessential American product, the iPhone, uses parts from 43 countries. As local products rise in price because of expensive foreign parts, prices rise, demand goes down, jobs are lost, and everyone is worse off.

Trump keeps talking about how he wants to ‘win’ at trade, but trade is not a zero-sum game. The most misunderstood term in our times is probably ‘trade-deficit’. A country has a trade deficit when it imports more than what it exports, and Trump thinks of that as a bad thing. It is not. I run a trade deficit with my domestic help and my local grocery store. I buy more from them than they do from me. That is fine, because we all benefit. It is a win-win game.

Similarly, trade between countries is really trade between the people of both countries – and people trade with each other because they are both better off. To interfere in that process is to reduce the value created in their lives. It is immoral. To modify a slogan often identified with libertarians like me, ‘Tariffs are Theft.’

These trade wars, thus, carry a touch of the absurd. Any leader who imposes tariffs is imposing a tax on his own people. Just see the chain of events: Trump taxes the American people. In retaliation, Modi taxes the Indian people. Trump raises taxes. Modi raises taxes. Nationalists in both countries cheer. Interests groups in both countries laugh their way to the bank.

What kind of idiocy is this? How long will this lose-lose game continue?

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





For this Brave New World of cricket, we have IPL and England to thank

This is the 24th installment of The Rationalist, my column for the Times of India.

Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu, nothing new, just another round on the treadmill. Although I would remember her fondly, I thought me and cricket were done.

And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring up into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering into a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket.

I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken up you up with such a jolt on this lovely Sunday morning, let me explain.

Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy.

When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal.

West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And what some sides began to realise was that they had been underestimating the value of aggression in one-day cricket as well.

The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive.

As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game.

In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve.

Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling.

This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin.

There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now.

But one day, they will all have to learn to play like this.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





Virtuoso Studio: Simplified Review of Operating Point Parameter Values

Read on to know about the Operating Point Parameters Summary window that gives you a one-stop view of the categorized and tabulated details on all operating point parameters in your design. This window improves your review cycle with its many benefits.



  • Analog Design Environment
  • Operating point summary window
  • Virtuoso Studio
  • Operating Point Information
  • Virtuoso Analog Design Environment
  • Custom IC Design
  • Virtuoso ADE Explorer
  • Virtuoso ADE Assembler
  • IC23.1


Start Your Engines: Optimizing Mixed-Signal Simulation Efficiency

During a mixed-signal simulation, the analog engine usually dominates the simulation time and resources. If you need to run only the analog engine in several windows, or if you would like to run multiple tests of the same circuit with different stimuli or test patterns, you need to run the simulation multiple times. View this blog to know more about the two advanced technologies that Spectre AMS Designer provides to help you improve the efficiency of your mixed-signal designs and increase the simulation speed.





Start Your Engines: Create and Insert Connect Modules for Mixed-Signal Verification

Read this blog to know how you can easily create and insert connect modules using Spectre AMS Designer with the Verilog-AMS standard language defined by Accellera.





Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer: Pt. 2

At a bustling Cadence event, we met Adrian, an intern at a startup who immerses himself in Cadence tools for his research and work.

Adrian was enthusiastic about the innovative technologies at his disposal but faced a significant challenge: internet access was limited to a single machine for new joiners, forcing interns to wait in line for their turn to use online resources.

Adrian's excitement soared when he discovered a game-changing solution: Doc Assistant. The cloud-based help viewer, Doc Assistant, ships with all Cadence tools, enabling Adrian to access help resources offline from any machine equipped with the software. This meant Adrian could continue his research and work seamlessly, irrespective of internet availability!

Meeting Cadence users and customers at such events has given us the opportunity to showcase how they can benefit from the diverse features that Doc Assistant offers.

With that note, welcome back to our Doc Assistant A-Z blog series! In Part 1, we explored key features and benefits that our innovative viewer brings to the table. Today, in Part 2, we'll dive deeper into the advanced functionalities and customization options that make Doc Assistant indispensable for its users.

Whether you're looking to streamline your workflow or enhance your user experience, this blog will provide the insights you need to fully leverage the capabilities of our documentation viewer. Let’s get started!

What Makes Doc Assistant Stand Out?

Here are a few (more) cool features of Doc Assistant!

History and Bookmarks: Want to refer to the topic you read last week? Of course, you can! Doc Assistant stores your browsing activity as History. You can also bookmark topics and revisit them later.

Indexing Capabilities: Looking for seamless search capabilities? The advanced indexing capabilities of Doc Assistant enhance the accessibility and manageability of documents. Doc Assistant automatically creates a search index if it is missing or broken.

Jump Links: Worried about scrolling through lengthy topics? Fret no more! Use the jump links in each topic to quickly navigate to different sections within the same topic or across topics. Jump links reduce the need for excessive scrolling and let you access relevant content swiftly.

Just-in-Time Notifications: Looking for alerts and messages? That’s supported. Doc Assistant displays notifications about important events, including errors, warnings, information, and success messages.

Keyword-Based Search Suggestions: You somewhat know your search keyword but aren't quite sure? No worries. Just start typing what you know. Keyword and page suggestions are displayed dynamically as you type, providing a more sophisticated and intuitive search experience.

Library-Switch Support: Want to view documents from other libraries? Doc Assistant, by default, displays documents for the currently active release in your machine. You can access documents from other releases by configuring the associated documentation libraries.

Multimedia Support: Want to view product demos? Multimedia support in Doc Assistant lets you play videos, listen to audio, and view images without opening any external application.

Navigation Made Easy: Worried that you’ll get lost in an infinite doc loop? Not at all. The intuitive navigation controls in Doc Assistant are designed to provide you with a fluid and efficient experience. The Doc Assistant user interface is clean and logically organized, with easy-to-access documentation links.

That's not all. We have more coming your way. Until next time, take care and stay tuned for our next edition!

Want to Know More?

Here's a video about Doc Assistant
Visit the Doc Assistant web page
Read the Doc Assistant FAQ document

For any questions or general feedback, write to docassistant.support@cadence.com.

Subscribe to receive email notifications about our latest Custom IC Design blog posts.

Happy reading!

-Priya Sriram, on behalf of the Doc Assistant Team





Knowledge Booster Training Bytes - Writing Physical Verification Language Rules

Have you ever wanted to write a DRC rule deck to check for space or width constraints on polygons? Or have you wondered how the multiple lines of an LVS rule deck extract and conduct a comparison between the schematic and layout? Maybe you've been curious about the role of rule deck writers in creating high-quality designs ready for tape-out.

If any of these questions interest you, there is good news: the latest version (v23.1) of the Physical Verification Rules Writer (PVLRW) course is designed to teach you rule deck writing. This free 16-hour online course includes audio and labs designed to make your learning experience comfortable and flexible. Whether you are new to the concept or an experienced CAD/PDK engineer, the course is structured to enhance your rule deck writing skills.

The PVLRW course covers six core modules: Layer Processing, DRC Rules, Layout Extraction, ERC and LVS Rules, Schematic Netlisting, and Coloring Rules. There are also three optional appendix sections. Each module explains relevant rules with syntax, concepts, graphics, examples, and case studies.

This course is based on tool versions PEGASUS231 and Virtuoso Studio IC231.

Pegasus Input and Output

Pegasus is a cloud-ready physical verification signoff solution that enables engineers to support faster delivery of advanced-node integrated circuits (ICs) to market.

Pegasus requires input data in the form of layout geometry, schematic netlists, and rules that direct the tool operation. The rules fall into two categories: those that describe the fabrication process and those that control the job-specific operation.

Pegasus provides log and report files, netlists, databases, and error databases as output.

Overview of Pegasus Rule File

The rule decks written in Physical Verification Language (PVL) work with both Cadence PV signoff tools: Pegasus and PVS (Physical Verification System).

The PVL rules are placed in a file that is selected for a run from the GUI or the command line, as the user directs. PVL rules may appear on separate lines within the file and can also be grouped into named rule blocks.

Each line of code starts with a PVL rule that uses prefix-type notation: a keyword followed by options, input layer or variable names, and output layer or variable names.

A rule block consists of the keyword rule, followed by the name you wish to give the block and an opening curly brace. You then enter the rules you want to perform and close the block with a curly brace on a separate final line.
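To make that structure concrete, here is a minimal sketch. The command and layer names below (check_width, check_space, metal1, and so on) are placeholders invented for illustration, not verified PVL keywords; see the Cadence Pegasus Developers Guide for the actual command set.

  // Placeholder keywords for illustration only, not verified PVL syntax
  // Stand-alone rule lines: keyword, options, input layer, output layer
  check_width metal1 < 0.50 output m1_width_err
  check_space metal1 metal1 < 0.45 output m1_space_err

  // The same checks grouped in a named rule block
  rule m1_min_rules {
      check_width metal1 < 0.50 output m1_width_err
      check_space metal1 metal1 < 0.45 output m1_space_err
  }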

[Figure: Sample rule deck with individual lines of code and rule blocks]

DRC Rules

The first step in a typical Pegasus flow is a Design Rule Check (DRC), which verifies that layout geometries conform to the minimum width, spacing, and other fabrication process rules required by an IC foundry. Each foundry specifies its own process-dependent rules that must be met by the layout design.

There are three types of DRC rules: layer definition rules, layer derivation rules, and DRC design check rules. Layer definition rules identify the layers contained in the input layout database, and layer derivation rules derive additional layers from the original input layers, allowing the tool to test the design against specific foundry requirements using the design check rules.
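For illustration, a deck that combines all three rule types might be organized along these lines; as before, the keywords are placeholders rather than verified PVL syntax.

  // 1. Layer definition: name the layers in the input layout database
  define_layer metal1 10
  define_layer via1   11
  define_layer metal2 12

  // 2. Layer derivation: compute new layers from the originals
  and_layers metal1 via1 output m1_via1_overlap

  // 3. Design checks: test drawn and derived layers against foundry limits
  check_width metal1 < 0.50 output m1_width_err
  check_space metal1 metal1 < 0.45 output m1_space_err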

[Figure: A sample DRC rule deck]

[Figure: A layout view displaying the DRC violations]

LVS Rules

The Pegasus Layout Versus Schematic (LVS) tool compares the layout netlist with the schematic netlist to check for discrepancies.

There are two essential LVS rule sets: LVS extraction rules and comparison rules. LVS extraction rules extract drawn devices and connectivity information from the input layout geometry data and output them into a layout netlist. The LVS extraction rule set also includes the layer definition, derivation, extraction, connectivity, and netlisting rules.

LVS comparison rules are associated with comparing the extracted layout netlist to a schematic netlist.
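Conceptually, the two rule sets sit side by side in the deck, in the same placeholder style as the earlier sketches (invented keywords, not verified PVL syntax):

  // Extraction side: layers, derivation, connectivity, devices, netlisting
  define_layer poly 20
  define_layer diff 21
  and_layers poly diff output gate          // derived MOS gate regions
  connect metal1 via1 metal2                // connectivity across layers
  extract_device nmos gate diff output layout.net

  // Comparison side: match the extracted layout netlist to the schematic
  compare_netlists layout.net schematic.net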

[Figure: A sample LVS rule deck]

TCL, Macros, and Conditional commands

Tcl is supported and used in various Pegasus functionalities, such as Pegasus rule files and the Pegasus Configurator. Macros are functional templates that are defined once and can be used multiple times in a rule file. Conditional commands let you process or skip specific commands in the rule file.
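To give a rough feel for these ideas, the snippet below is plain Tcl: a variable parameterizes a limit, a proc acts like a macro that is defined once and reused, and an if conditionally emits a command. The exact way Tcl embeds in a Pegasus rule file is covered in the course; this sketch only illustrates the concepts, and the emitted check_width line reuses the placeholder keyword from the earlier sketches.

  # Plain Tcl illustrating variables, macro-like procs, and conditionals
  set min_m1_width 0.50                  ;# parameter shared across checks

  # A macro-like proc: defined once, expanded wherever it is called
  proc width_rule {layer limit} {
      return "check_width $layer < $limit output ${layer}_width_err"
  }

  # Conditional processing: include the rule only when the limit applies
  if {$min_m1_width < 1.0} {
      puts [width_rule metal1 $min_m1_width]
  }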

Do You Have Access to the Cadence Support Portal?

If not, follow the steps below to create your account.

  • On the Cadence Support portal, select Register Now and provide the requested information on the Registration page.
  • You will need an email address and host ID to sign up.
  • If you need help with registration, contact support@cadence.com.

To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails.

If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training.

For any questions, general feedback, or future blog topic suggestions, please leave a comment.

Related Resources

Product Manuals

Cadence Pegasus Developers Guide

Rapid Adoption Kits

Running Pegasus DRC/LVS/FILL in Batch Mode

Training Byte Videos

What Is the Run Command File?

How to Run PVS-Pegasus Jobs in GUI and Batch modes?

PVS DRC Run From - Setup Rules

What Is PVS/Pegasus Layer Viewer?

PVL Coloring Ruledecks with Docolor and Stitchcolor 

PVL Commands: dfm_property with Primary & Secondary Layer

PVS Quantus QRC Overview 

Online Courses

Pegasus Verification System

PVS (Physical Verification System)

Virtuoso Layout Design Basics

About Knowledge Booster Training Bytes

Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis. Subscribe to receive email notifications about our latest Custom IC Design blog posts.




ng

Start Your Engines: The Innovation Behind Universal Connect Modules (UCM)

Read this blog to learn more about the innovation behind Universal Connect Modules (UCM).




ng

Virtuoso Studio IC 23.1: Using Net Tracer for Design Review

This blog explores how Virtuoso Studio Net Tracer can help you perform a design review.

We’ll use the net connectivity option, which gives you a cleanly highlighted net. You can use the Net Tracer tool to highlight nets; the Net Tracer command is available under the Connectivity pull-down menu in the layout window.

With the Trace Manager and the ability to display different islands of the same net in different colors, you can identify and connect the unconnected islands as you wish.

The Net Tracer utility traces the nets in the physical view (layout). The trace is a highlighted net, which is a non-selectable object. The Net Tracer utility is available from Virtuoso Layout Suite XL onwards. You can use this utility based on your specific needs and preferences.

For a better understanding of the Net Tracer feature, let’s see one scenario between the circuit designer and layout engineer for a layout design review.

Circuit designer: Can we go through the routed input nets “inm” and “inp”?

Layout engineer: In the layout view below, the nets are highlighted using XL connectivity. Today, I will use the Net Tracer utility for the design review.

Circuit designer: I have never heard of this feature. Let's see how it works.

Layout engineer: Sure. First, we turn on the Net Tracer toolbar using the option below.

You see the Net Tracer options form here:

As you can see on my screen, I have opened the layout view and engaged the Net Tracer utility.

Net Tracer allows shapes to be traced on a net in two tracing modes, namely, physical and logical, where shapes on the same net are physically or logically connected.

Physical tracing gathers all the shapes physically connected on the same net.

Logical tracing gathers all the shapes assigned to the same net. It highlights the net as in the source design (schematic). It will highlight shapes on the same net, even if they are isolated shapes that are not physically connected.

For this scenario, let us use physical tracing for input nets “inm” and “inp."

Highlighted nets are shown below:

[Figure: Highlighted nets: Net “inm” (left), Net “inp” (center), and Nets “inm” and “inp” together (right)]

Net Tracer has features like physical and logical tracing, preview, step-by-step mode, ease of tracing a net on a shape out of multiple underlying shapes, and so on.

Let us explore logical tracing for output nets “outm” and “outp”:

Here, you can see how to enable true color and halo, which helps identify the metal route, before enabling logical tracing. After enabling true color and halo, enable the logical trace.

Here, I open the Trace Manager, search for “outm” and “outp”, and click Trace, which traces those nets as shown.

Net Tracer has a preview feature that hints at how the trace will appear when you create it, including the number of previewed objects. This useful feature in Virtuoso Studio highlights both completed and incomplete nets, helping you better understand the status of the highlighted nets.

Circuit designer: Thanks for the design review; good work. Net Tracer clearly shows both types of tracing, and it was easy even for a circuit designer to understand.

Layout engineer: Let me share the link to the Net Tracer RAK, where other layout engineers can explore many more amazing features of the Net Tracer.

Do You Have Access to the Cadence Support Portal?

If not, follow the steps below to create your account.

  • On the Cadence Support portal, select Register Now and provide the requested information on the Registration page.
  • You will need an email address and host ID to sign up.
  • If you need help with registration, contact support@cadence.com.

To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails.

If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training.

For any questions, general feedback, or future blog topic suggestions, please leave a comment.

Become Cadence Certified

Cadence Training Services now offers digital badges for this training course. These badges indicate proficiency in a certain technology or skill and give you a way to validate your expertise to managers and potential employers. You can highlight your expertise by adding these digital badges to your email signature or any social media platform, such as Facebook or LinkedIn. To become Cadence Certified, you can find additional information here.

Related Resources

 Videos

Invoking the MarkNet, Net Tracer command and its options

Net Tracer Features

Video: Net Tracer saving and loading saved trace, neighboring shapes of trace

Net Tracer: Physical Tracing – Step mode

Net Tracer: Physical and Logical Tracing

Video: Net Tracer show preview option, from net and display options, shape count in trace

Video: Net Tracer using a constraint group with different display mode settings and using the Trace Manager GUI

 RAK

Introduction to Net Tracer

 Product Manual

Virtuoso Layout Suite XL: Connectivity Driven Editing User Guide IC23.1

About Knowledge Booster Training Bytes

Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars that are available in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis.

Sandhya.

On behalf of the Cadence Training team




ng

Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer Part 3

Welcome back to the Doc Assistant A-Z blog series!

Since the launch of Doc Assistant, we've been gathering feedback and input from our customers regarding their experiences with our latest documentation viewer. My interaction with Ralf was particularly useful and interesting.

Ralf is a design engineer who works on complex schematics and intricate layouts. For each release, he is challenged with the task of verifying the tool and feature changes across multiple releases. He shared with me that he has been using Doc Assistant’s capabilities to help him achieve this.

Ralf explained that he utilizes Doc Assistant to open and compare documents from different releases side-by-side, seamlessly tracking updates across multiple releases and verifying those updates in his Cadence tools. Additionally, in Doc Assistant’s online mode, he compares documents across previous tool versions, ensuring a thorough review of any changes. Finally, he was happy to share with me that Doc Assistant features have helped him significantly reduce the time he spends on identifying such changes.

You, of course, can also achieve such productivity gains using several Doc Assistant features designed to help simplify such tasks!

In previous editions of this blog series, we looked at some key features and benefits of Doc Assistant. If you've missed those editions, I highly recommend reading them.

In this third installment, we're diving into some more of Doc Assistant's key capabilities.

Open Multiple Documents

Want to refer to multiple docs at the same time? That’s easy!

Open each doc on a separate tab in Doc Assistant. 

Personalized Content Recommendations

Is it a hassle to navigate through all docs each time? You don’t have to.

You can tailor your Doc Assistant preferences to match your content requirements.

PDF Support

Do you prefer downloading and reading a PDF instead of HTML?

That’s also supported.

Quick Access to Relevant Search Results

Are you pressed for time, and yet want to run a comprehensive doc search? You’re covered.

In online mode, search runs on all available product documentation, and the results are listed from multiple sources.

Resource Links

Looking for more information about a topic you’ve just read? Resource links make that easy.

Look out for content recommendations!

Share Content

Want to share a useful doc with the rest of your team? That’s easy.

With a single click, Doc Assistant lets you share content with one or more readers.

Submit Feedback

Your feedback is important to us. Use the Submit Feedback feature to share your comments and inputs.

To learn more about how to use the above features, check out the Doc Assistant User Guide.

These are just a few of the productivity-boosting features in Doc Assistant. We’ll cover more in the next blog in the series.

Want to Know More?

Here's a video about Doc Assistant
Visit the Doc Assistant web page
Read the Doc Assistant FAQ document

If you have any feedback on Doc Assistant or would like to request more information or a demo, please contact docassistant.support@cadence.com.

Subscribe to receive email notifications about our latest Custom IC Design blog posts.

Happy reading!

Priya Sriram, on behalf of the Doc Assistant Team




ng

PCB Chamfering Board edge connectors

Hi 

I am looking into chamfering the edges of a PCB for board edge connectors. I have used the fillet command before but am new to chamfering.

Below is the description:

As seen above, the PCB edges are chamfered through the board thickness as well as at the corners.

I am using OrCAD PCB hotfix S023.




ng

Purging duplicate vias in PCB Editor

How do we purge/remove duplicated vias at the same location in PCB Editor? These vias are not stacked; they are just blind vias running through internal layers 12-14. I found an additional copy of a blind via at the same location and am not sure what caused this issue.




ng

Orcad PCB (allegro) not using GPU over USB

Hi,

I have a monitor plugged into my laptop using an HDMI-to-USB adapter. When I use this adapter, Allegro runs very slowly; it seems that it is not using my video card.

Is this a known issue with a workaround I can try?

Thanks,

Michael




ng

17.4 Design Sync Fails without providing errors

As the title suggests, I am unable to perform a design sync between OrCAD Capture and Allegro. When I add a layout and try to sync to it, I get ERROR(ORCAP-2426): Cannot run Design Sync because of errors. See session log for error details.

Session Log

[ORPCBFLOW] : Invoking ECO dialog.
INFO(ORNET-1176): Netlisting the design
INFO(ORNET-1178): Design Name:
C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN
Netlist Directory:
c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro
Configuration File:
C:\Cadence\SPB_17.4\tools/capture/allegro.cfg
pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
Spawning... pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
{ Using PSTWRITER 17.4.0 d001Dec-14-2021 at 09:00:49 }

INFO(ORCAP-36080): Scanning netlist files ...

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstchip.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstchip.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstxprt.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstxnet.dat
packaging the design view...
Exiting... pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
INFO(ORNET-1179): *** Done ***
INFO(ORNET-1179): *** Done ***

This issue started to occur after I changed parts that exist on previously created PCBs. I changed the following leading up to this:

1. Added height in Allegro to many of my components using the Setup->Area->Package Height tool.

2. Changed the reference designator category in OrCAD Capture to TP for several components on board.

Any advice here would be most welcome. Thanks!




ng

Allegro part of DPI does not support scaling above 150%





ng

Migrating from files Orcad Layout 16.2

I have managed to convert our old schematic and PCB files from Layout 16.2 to 17.4.

I have exported the footprints and moved them to the correct lib directory. 

I get no DRC errors, and I can build a new netlist file. The problem is that I can't get the PCB editor to update using the new netlist; I get the following error:

I cannot figure out how to fix the 'Name is too long' error.

(---------------------------------------------------------------------)
(                                                                     )
(    Allegro Netrev Import Logic                                      )
(                                                                     )
(    Drawing          : 70055R2.brd                                   )
(    Software Version : 17.4S023                                      )
(    Date/Time        : Tue Dec 14 18:54:25 2021                      )
(                                                                     )
(---------------------------------------------------------------------)


------ Directives ------------

Ripup etch:                  Yes
Ripup delete first segment:  No
Ripup retain bondwire:       No
Ripup symbols:               IfSame
Missing symbol has error:    No
DRC update:                  Yes
Schematic directory:         'C:/AFS/70055 PCB Test 2'
Design Directory:            'C:/AFS/70055 PCB Test 2'
Old design name:             'C:/AFS/70055 PCB Test 2/70055R2.brd'
New design name:             'C:/AFS/70055 PCB Test 2/70055R2.brd'

CmdLine: netrev -$ -i C:/AFS/70055 PCB Test 2 -x -u -t -y 2 -h -z -q netrev_constraint_report.xml C:/AFS/70055 PCB Test 2/#Taaaaae57776.tmp

------ Preparing to read pst files ------

Starting to read C:/AFS/70055 PCB Test 2/pstchip.dat 
   Finished reading C:/AFS/70055 PCB Test 2/pstchip.dat (00:00:00.02)
Starting to read C:/AFS/70055 PCB Test 2/pstxprt.dat 
   Finished reading C:/AFS/70055 PCB Test 2/pstxprt.dat (00:00:00.00)
Starting to read C:/AFS/70055 PCB Test 2/pstxnet.dat 
   Finished reading C:/AFS/70055 PCB Test 2/pstxnet.dat (00:00:00.00)

------ Oversights/Warnings/Errors ------


#1   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_9_SWITCH_OTTO_ALT_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_9_SWITCH_OTTO_ALT_SW DP' has library errors. Unable to transfer to Allegro.

#2   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_10_SWITCH_OTTO_LIGHTS_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_10_SWITCH_OTTO_LIGHTS_S' has library errors. Unable to transfer to Allegro.

#3   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_7_SWITCH_OTTO_ALT_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_7_SWITCH_OTTO_ALT_SW DP' has library errors. Unable to transfer to Allegro.

#4   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_3_SWITCH_OTTO_MASTER_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_3_SWITCH_OTTO_MASTER_SW' has library errors. Unable to transfer to Allegro.

#5   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_6_SWITCH_OTTO_LIGHTS_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_6_SWITCH_OTTO_LIGHTS_SW' has library errors. Unable to transfer to Allegro.

#6   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_3_SWITCH_OTTO_MASTER_DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_3_SWITCH_OTTO_MASTER_DP' has library errors. Unable to transfer to Allegro.

#7   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'CONNECTOR DB15_DSUBVPTM15_CONNECTOR DB15': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'CONNECTOR DB15_DSUBVPTM15_CONNE' has library errors. Unable to transfer to Allegro.

#8   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'CONNECTOR DB9_DSUBVPTM9_CONNECTOR DB9': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'CONNECTOR DB9_DSUBVPTM9_CONNECT' has library errors. Unable to transfer to Allegro.

#9   ERROR(SPMHNI-175): Netrev error detected.

ERROR(SPMHDB-195): Error processing 'M6': Text line is outside of the extents..

------ Library Paths ------
MODULEPATH =  . 
           C:/Cadence/SPB_17.4/share/local/pcb/modules 

PSMPATH =  . 
           symbols 
           .. 
           ../symbols 
           C:/Cadence/SPB_17.4/share/local/pcb/symbols 
           C:/Cadence/SPB_17.4/share/pcb/pcb_lib/symbols 
           C:/Cadence/SPB_17.4/share/pcb/allegrolib/symbols 
           C:/Cadence/SPB_17.4/share/pcb/pcb_lib/symbols 

PADPATH =  . 
           symbols 
           .. 
           ../symbols 
           C:/Cadence/SPB_17.4/share/local/pcb/padstacks 
           C:/Cadence/SPB_17.4/share/pcb/pcb_lib/symbols 
           C:/Cadence/SPB_17.4/share/pcb/allegrolib/symbols 
           C:/Cadence/SPB_17.4/share/pcb/pcb_lib/symbols 


------ Summary Statistics ------


#10  Run stopped because errors were detected

netrev run on Dec 14 18:54:25 2021
   DESIGN NAME : '70055R2'
   PACKAGING ON Nov  2 2021 14:32:04

   COMPILE 'logic'
   CHECK_PIN_NAMES OFF
   CROSS_REFERENCE OFF
   FEEDBACK OFF
   INCREMENTAL OFF
   INTERFACE_TYPE PHYSICAL
   MAX_ERRORS 500
   MERGE_MINIMUM 5
   NET_NAME_CHARS '#%&()*+-./:=>?@[]^_`|'
   NET_NAME_LENGTH 24
   OVERSIGHTS ON
   REPLACE_CHECK OFF
   SINGLE_NODE_NETS ON
   SPLIT_MINIMUM 0
   SUPPRESS   20
   WARNINGS ON

 10 errors detected
 No oversight detected
 No warning detected

cpu time      0:00:27
elapsed time  0:00:00