
Jasper Formal Fundamentals 2403 Course for Starting Formal Verification

The course "Jasper Formal Fundamentals v24.03" introduces formal analysis to those who want to use formal analysis for design or verification. 

To optimally benefit from this course, you should already know SystemVerilog Assertions (SVA) well enough to write properties for formal verification. To help cover this essential background, the training includes a module on SVA for formal analysis.

In this course, you will learn how to code efficient SVA properties for formal analysis, understand formal complexity and how to overcome it, and learn the basics of formal coverage.

After completing this course, you will be able to:

  • Define reusable, functionally correct SVA properties that are efficient for formal tools, using abstract auxiliary code to simplify descriptions, ease code maintenance, reduce debug time, and cut proof runtime.
  • Set up, run, and analyze results from formal analysis.
  • Identify designs upon which formal is likely to be successful while understanding formal complexity issues and how to identify and overcome them.
  • Use a systematic property development process to approach a completely new verification problem.
  • Understand the basics of formal coverage.

The most recently updated release includes new modules on:

  • "Basic complexity handling," which discusses complexity in formal analysis and how to identify and handle it.
  • "Complexity reduction methods," which discusses the available complexity reduction methods and which method suits each type of complexity problem.
  • "Coverage in formal," which discusses the basics of coverage in formal verification and how coverage can be used in formal analysis.

Take this course to learn the basics of formal verification. 

What's Next? 

You can check out the complete training: Jasper Formal Fundamentals. There is a free online version of the training available 24/7 for all customers with a Cadence Learning and Support Portal account. If you are interested in an instructor-led version of the training, please contact Cadence Training. And don't forget to obtain your digital badge after completing the training!

You can also check the Jasper University page for more materials on formal analysis and Jasper apps. 

Related Trainings 

Jasper Formal Expert Training Course | Cadence

Verilog Language and Application Training Course | Cadence

SystemVerilog for Design and Verification Training Course | Cadence

SystemVerilog Assertions Training Course | Cadence

Related Training Bytes 

Jasper Formal Property Verification (FPV) App: Basic Usage Demo (Video)

Jasper Formal Methodology playlist

Related Training Blogs

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

Training Insights: Introducing the C++ Course for All Your C++ Learning Needs!

Training Insights: Reaching Your Verification Closure Using Verisium Manager

Training Insights - Free Online Courses on Cadence Learning and Support Portal





Deferrable Memory Write Usage and Verification Challenges

Deferrable Memory Write is valuable wherever real-time data processing or responsiveness is crucial, such as in high-performance computing, data centers, or applications requiring low-latency data transfers. It enables efficient use of PCIe bandwidth and resources by intelligently managing memory write operations based on system dynamics and workload priorities. By effectively leveraging Deferrable Memory Write (DMWr), devices can achieve optimized performance and responsiveness, aligning with the evolving demands of modern computing applications.

What Is Deferrable Memory Write?

The Deferrable Memory Write (DMWr) ECN introduced this new memory transaction type, which was later officially incorporated into the PCIe 5.0 and CXL 2.0 specifications. DMWr flows like the existing Read/Write memory transactions; the major difference is that the Requester attempts to write to a given location in Memory Space using the non-posted DMWr TLP type, and completion of the memory write can be delayed or deferred until other, higher-priority tasks complete, improving overall system efficiency and performance.

The Deferrable Memory Write (DMWr) requires the Completer to return an acknowledgment to the Requester and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request.

DMWr provides a mechanism for Endpoints and Hosts to choose to carry out or defer incoming DMWr Requests. This mechanism can be used by Endpoints and Hosts to simplify the design of flow control, reduce latency, and improve throughput. The Deferrable Memory Write TLP format is shown in Figure A.

 

(Figure A) Deferrable Memory Write TLP format.

Example Scenario

Here's how DMWr works, with a simplified example: imagine a system with an endpoint device (Device A) and a host CPU (Device B). Device B wants to write data to Device A's memory, but for various reasons, such as system bus congestion or prioritization of other transactions, Device A can defer the completion of the memory write request. The flow works as follows:

  1. Initiation of Memory Write: Device B initiates a memory write transaction to Device A. This involves sending the memory write request along with the data payload over the PCIe physical layer link.
  2. Acknowledgment and Deferral: Upon receiving the memory write request, Device A acknowledges the transaction but may decide to defer its completion. Device A sends an acknowledgment (ACK) back to Device B, indicating it has received the data and intends to complete the write operation but not immediately.
  3. Deferred Completion: Device A defers the completion of the memory write operation to a later, more opportune time. This deferral allows Device A to prioritize other transactions or optimize the use of system resources, such as memory bandwidth or processor availability.
  4. Completion and Response: At a later point, Device A completes the deferred memory write operation and sends a completion indication back to Device B. This completion typically includes any status updates or additional information related to the transaction. (The assertion sketch below captures this handshake.)
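The request/acknowledge/completion handshake described above lends itself to a simple liveness check. Below is a minimal SVA sketch of such a check; the signal names (clk, rst_n, dmwr_req, dmwr_ack, dmwr_cpl) are hypothetical abstractions of the TLP-level events, not actual ports of any specific design or VIP.

// Hedged sketch: every non-posted DMWr request must eventually be
// acknowledged and then completed. All signal names are illustrative.
property p_dmwr_eventually_completes;
  @(posedge clk) disable iff (!rst_n)
    dmwr_req |-> ##[1:$] dmwr_ack ##[1:$] dmwr_cpl;
endproperty
AST_DMWR_COMPLETES: assert property (p_dmwr_eventually_completes);

In a real environment, the three events would be decoded from the TLP stream, and the unbounded ##[1:$] windows would typically be replaced with protocol-derived bounds.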

Usage or Importance of DMWr

Deferrable Memory Write provides improvements in the following aspects:

  • Reduced Latency: By deferring less critical memory write operations, more critical transactions can be processed with lower latency, improving overall system responsiveness.
  • Improved Efficiency: Optimizes the utilization of system resources such as memory bandwidth and CPU cycles, enhancing the efficiency of data transfers within the PCIe architecture.
  • Enhanced Performance: Allows devices to manage and prioritize transactions dynamically, potentially increasing overall system throughput and reducing contention.

Challenges in the Implementation of DMWr Transactions

The implementation of deferrable memory writes (DMWr) introduces several advancements and challenges in terms of usage and verification:

  1. Timing and Synchronization: DMWr allows transactions to be deferred, which complicates meeting timing requirements and completing transactions within acceptable timing windows to avoid protocol violations. Ensuring proper synchronization between devices becomes critical to prevent data loss or corruption.
  2. Protocol Compliance: Verification must ensure compliance with ECN PCIe 6.0 and CXL specifications regarding when and how DMWr transactions can be initiated and completed.
  3. Performance Optimization: While DMWr can improve overall system performance by reducing latency, verifying its impact on system performance and ensuring it meets expected benchmarks is crucial.
  4. Error Handling: Handling errors related to deferred transactions adds complexity. Verifying error detection and recovery mechanisms under various scenarios (e.g., timeout during deferral) is essential.

Verification Challenges of DMWr Transactions

Verifying DMWr transactions involves checks covering function, timing, protocol compliance, performance, error scenarios, and security usage, as well as data integrity at both the PCIe and CXL levels.

  1. Functional Verification: Verifying the correct implementation of DMWr at both ends of the PCIe link (transmitter and receiver) to ensure proper functionality and adherence to specifications.
  2. Timing Verification: Validating timing constraints associated with deferring writes and ensuring transactions are completed within specified windows without violating protocol rules.
  3. Protocol Compliance Verification: Checking that DMWr transactions adhere to PCIe and CXL protocol rules, including ordering rules and any restrictions on deferral based on the transaction type.
  4. Performance Verification: Assessing the impact of DMWr on overall system performance, including latency reduction and bandwidth utilization, through simulation and testing.
  5. Error Scenario Verification: Creating and testing scenarios to verify error handling mechanisms related to DMWr, such as timeouts, retries, and recovery procedures.
  6. Security Considerations: Assessing potential security vulnerabilities related to DMWr, such as data integrity risks during deferred transactions or exposure to timing-based attacks.

Among these, timing and synchronization verification is the major challenge in implementing deferrable memory writes (DMWr), owing to the inherent complexities introduced by deferred transactions. Here are the key issues and approaches to address them:

Timing and Synchronization Issues

  1. Transaction Completion Timing:
    • Issue: Ensuring deferred transactions are completed within the specified time window without violating protocol timing constraints.
    • Approach: Design an internal timer and checker to model worst-case scenarios where transactions are deferred, and verify that they complete within allowable latency limits (see the sketch after this list). This involves simulating various traffic loads and conditions to assess timing under different scenarios.
  2. Ordering and Dependencies:
    • Issue: Verifying that transactions deferred using DMWr maintain the correct ordering and dependencies relative to non-deferred transactions.
    • Approach: Implement test scenarios that include mixed traffic of DMWr and non-DMWr transactions. Verify through simulation or emulation that dependencies and ordering requirements are correctly maintained across the PCIe link.
  3. Interrupt Handling and Response Times:
    • Issue: Verifying the handling of interrupts and ensuring timely responses from devices involved in DMWr transactions.
    • Approach: Implement test cases that simulate interrupt generation during DMWr transactions. Measure and verify the response times to interrupts to ensure they meet system latency requirements.
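As a concrete illustration of the timer-and-checker approach from item 1, here is a minimal SVA sketch. The bound MAX_DEFER_CYCLES and the signals dmwr_deferred and dmwr_cpl are assumptions for illustration; the actual latency budget must come from the specification and system analysis.

// Once a DMWr has been deferred, its completion must arrive within the
// allowable latency window; otherwise the checker flags a violation.
parameter int MAX_DEFER_CYCLES = 1024;  // assumed worst-case budget

property p_deferred_write_bounded;
  @(posedge clk) disable iff (!rst_n)
    dmwr_deferred |-> ##[1:MAX_DEFER_CYCLES] dmwr_cpl;
endproperty
AST_DMWR_BOUNDED_LATENCY: assert property (p_deferred_write_bounded);

Running such a bounded check under varied randomized traffic loads exercises the worst-case deferral scenarios the approach calls for.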

In conclusion, while deferrable memory writes in PCIe and CXL offer significant performance benefits, their implementation and verification present several challenges related to timing, protocol compliance, performance optimization, and error handling. Addressing these challenges requires rigorous testing with realistic traffic, advanced verification methodologies, and a thorough understanding of the PCIe specifications, as well as of the motivation behind introducing deferrable writes, which CXL goes on to use extensively. Successful verification confirms that the performance benefits of DMWr (reduced latency, improved throughput) are achieved without compromising timing integrity or violating protocol specifications.

In summary, PCIe and CXL are complex protocols with many verification challenges. You must understand the many new specification changes and build a robust verification plan covering both the new features and the backward-compatibility tests they impact. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specification and provides an effective and efficient way to verify components interfacing with PCIe 6.0. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early-adopter customers to speed up every verification stage.






Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfers, ensuring data integrity and encryption during verification is an essential goal. In the field of verification, randomization is a key technique that drives robust PCIe verification. It introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more details on PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature.

The Importance of Data Integrity and Data Encryption in PCIe Devices

Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification.

Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds.

Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64 GT/s in PCIe 6.0 and 128 GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details on this.

Randomization in PCIe Verification

Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures and serious hindrances later. For PCIe IDE verification, randomization helps us verify behavior more efficiently.

Randomization for Data Integrity Verification

Here are some randomized verification approaches that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with normal verification methods.

  1. Randomized Packet Injection: This technique injects randomized data packets into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system's ability to detect and reject unauthorized or invalid packets, and check that encryption/decryption occurs correctly across packets. On verifying, we check whether the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checks. The technique helps assess the resilience of the IDE feature in PCIe in terms of: (i) Data corruption: detecting whether the system can maintain data integrity. (ii) Encryption failures: testing the robustness of the encryption under random data injection. (iii) Packet ordering errors: ensuring reordering does not affect data delivery.
  2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation: (i) detects and responds to unexpected errors; (ii) maintains secure communication under stress; (iii) follows the PCIe error recovery and reporting mechanisms (AER, Advanced Error Reporting); (iv) ensures encryption and decryption states stay synchronized across endpoints.
  3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads.

Randomization for Data Encryption Verification

Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, and it ensures that vulnerabilities, such as key reuse or predictable patterns, are identified and mitigated.

  1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoding assumptions. This ensures that the encryption logic behaves correctly across all possible inputs.
  2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, the diagram below illustrates a detailed example key programming flow using the IDE_KM protocol.

Figure 1: IDE_KM Example

As Figure 1 shows, the IDE_KM protocol involves the start of the IDE_KM session, device capability discovery, a key request from the host, key programming to the PCIe device, and key acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if a device supports the IDE protocol, the system continues with the key programming process. A query then discovers the device's encryption capabilities and establishes whether the device supports dynamic key updates or static keys. The host then sends a request to the key management entity to obtain a key suitable for the device. Once the key is obtained, the host programs it into the IDE controller on the PCIe endpoint. Both the host and the device now share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established. From this point, all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM plays a crucial role in distributing keys in a secure manner and maintaining encryption and integrity for PCIe transactions. This key programming flow ensures that a secure communication channel is established between the host and the PCIe device. Hence, the randomized key approach ensures that the encryption does not repeat patterns.

  3. Randomized PHE: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. PHE is validated using a variety of traffic; incorporating randomization in the APIs provided for validating the PHE feature can add more robust encryption to the data. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this.

Figure 2: High-Level Flow for Partial Header Encryption

  4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are supposed to be configured considering the memory address range of the IDE partner ports. The fields of the IDE address registers are split into multiple values, such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks considering 32- or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, and so on. Randomized verification can help cover all the corner cases. Please refer to Figure 3.

Figure 3: IDE Address Association Register

  5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage.

Challenges of IDE Randomization and Their Solutions

Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs increases complexity: encryption masks data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks on the IDE callback to analyze encrypted traffic without decrypting it. Randomization can also trigger unexpected failures, which are often difficult to reproduce. With seed-based randomization, a specific seed generates a repeatable random sequence, which helps in reproducing and analyzing the behavior precisely (see the sketch at the end of this post).

Conclusion

Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps uncover subtle bugs and edge cases that non-randomized testing might miss. In Cadence PCIe VIP, we support full-fledged IDE verification with rigorous randomized verification that ensures data integrity. Robust and reliable encryption mechanisms ensure secure and efficient data communication. However, randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve with higher speeds and a focus on high security demands, our Cadence PCIe VIP stays in line with industry demand and verifies high-performance systems that safeguard data in real-world environments with excellence. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0.

More Information:

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express.

For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.
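To make the constrained, seed-based randomization discussed above concrete, here is a minimal SystemVerilog sketch. The class and field names (ide_packet, payload, iv, inject_error) are illustrative assumptions only and do not correspond to the Cadence PCIe VIP API.

// Minimal sketch of constrained, seed-based randomization for IDE-style
// traffic. All names are illustrative, not the Cadence PCIe VIP API.
class ide_packet;
  rand bit [7:0]  payload [];
  rand bit [95:0] iv;            // per-packet initialization vector
  rand bit        inject_error;  // occasionally corrupt a packet on purpose

  // Constrained randomization: keep inputs legal while still hitting edges.
  constraint c_payload_size { payload.size() inside {[1:256]}; }
  constraint c_error_rate   { inject_error dist {1'b0 := 95, 1'b1 := 5}; }
endclass

module tb;
  initial begin
    ide_packet pkt = new();
    // Seed-based randomization: the same seed reproduces the same stimulus
    // sequence, making random failures reproducible for debug.
    pkt.srandom(32'hDEAD_BEEF);
    repeat (10) begin
      if (!pkt.randomize()) $error("randomization failed");
      // ... drive pkt onto the link and feed the scoreboard here ...
    end
  end
endmodule

Constraining the error rate keeps most packets legal while still injecting faults, and fixing the seed makes any failure found by a random run reproducible for debug.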





Using an oscilloscope waveform CSV file as the PSpice simulation signal source

hi,

     I saved the oscilloscope waveform file in CSV format.

     Now, I need to use this waveform file as the signal source for a low-pass filter.

     I searched and read the PSpice help documents and did not find a method.

     How can I realize this?

     Are there any reference documents or examples?

     Thanks!

    





LM117 Spice Model

I am looking for an LM117 PSpice model. Can someone send me the file? Thank you!





Arduino: how to save the dynamic memory?

When the first serial port is enabled on the Arduino Mega2560, dynamic memory usage is 2000 bytes; when the second serial port is added, it is 4000 bytes. Now I need to add a third serial port, which brings dynamic memory usage to 6000 bytes. Because of the many variables in the program itself, there is not enough dynamic memory. Please help me: how can I save dynamic memory?





Matlab cannot open PSpice; it prompts that orCEFSimpleUI.exe has stopped working!

Cadence_SPB_17.4-2019 + Matlab R2019a

Please follow the steps in this document:

1. Open BJT_AMP.opj

2. Set the Matlab path

3. Open BJT_AMP_SLPS.slx

4. After opening it and setting up the PSpiceBlock, a message appears that orCEFSimpleUI.exe has stopped working

5. Add a block

6. Same result

7. Open pspsim.slx

8. Same result

9. Open C:\Cadence\Cadence_SPB_17.4-2019\tools\bin

orCEFSimpleUI.exe and orCEFSimple.exe

10. Same result

I would like to ask how to solve this. Thank you very much!





Formal Verification Approach for I2C Slave

Hello,

I am new in formal verification and I have a concept question about how to verify an I2C Slave block.

I think the response should be valid for any serial interface which needs to receive information for several clocks before making an action.

The protocol description is the following:

I have a serial clock (SCL), Serial Data Input (SDI) and Serial Data Output (SDO), all are ports of the I2C Slave block.

The protocol looks like this:

The first byte received by the slave consists of 7 bits of sensor address, and the 8th bit is the command: 0/1 for Write/Read.

After the first 8 bits, the slave sends an ACK (SDO = 1 for 1 clock) if the sensor address is correct.

Let's consider only this case, where I want to verify that the slave responds with an ACK if the sensor address is correct.

The only solution I found so far was to use the internal buffer from the block, which saves the received bits during the 8 clocks. The signal is called shift_s.

I also needed to use internal chip state (state_s) and an internal counter (shift_count_s).

Instead of doing a direct check of the SDO (sdo_o) depending on SDI (sdi_d_i), I used the internal shift_s register.

My question is whether my approach is the correct one, or whether there is a possibility to write the verification at a blackbox level.

Below you have the 2 properties: the first checks the connection from SDI to the internal buffer; the second checks the connection between the internal buffer and the output.

// While the slave is receiving the address byte, every SCL edge must shift
// the sampled SDI bit into the internal shift register (whitebox check).
property prop_i2c_sdi_store;
  @(posedge sclk_n_i)
  $past(i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR)
    |-> i2c_bl.shift_s == byte'({$past(i2c_bl.shift_s), $past(sdi_d_i)});
endproperty
APF_I2C_CHECK_SDI_STORE: assert property(prop_i2c_sdi_store);

// After the 8th address bit has been shifted in (shift_count_s == 7), if the
// received byte matches the selected sensor address with either command bit
// (sens_addr = write, sens_addr + 1 = read), the slave must ACK on sdo_o.
property prop_i2c_sensor_addr(sens_addr_sel, sens_addr);
  @(posedge sclk_n_i)
    (i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR) &&
    (i2c_addr_i == sens_addr_sel) && (i2c_bl.shift_count_s == 7)
    ##1 (i2c_bl.shift_s inside {sens_addr, sens_addr + 1}) |-> sdo_o;
endproperty
APF_I2C_CHECK_SENSOR_ADDR0: assert property(prop_i2c_sensor_addr(0, `I2C_SENSOR_ADDRESS_A0));
APF_I2C_CHECK_SENSOR_ADDR1: assert property(prop_i2c_sensor_addr(1, `I2C_SENSOR_ADDRESS_A1));
APF_I2C_CHECK_SENSOR_ADDR2: assert property(prop_i2c_sensor_addr(2, `I2C_SENSOR_ADDRESS_A2));
APF_I2C_CHECK_SENSOR_ADDR3: assert property(prop_i2c_sensor_addr(3, `I2C_SENSOR_ADDRESS_A3));

PS: i2c_addr_i is the address selection for the slave (there are 4 configurable sensor addresses, but this is not important for the case).

Thank you!





Can't request Tensilica SDK - Error 500

Hi,

I'm looking to download the Tensilica SDK for evaluation, but I can't get past the registration form:





Here Is Why the Indian Voter Is Saddled With Bad Economics

This is the 15th installment of The Rationalist, my column for the Times of India.

It’s election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important that so much attention is being given to you. Well, the joke is on you.

Every election, HL Mencken once said, is “an advance auction sale of stolen goods.” A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago:

POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ ‘It is kind of funny./ We steal people’s money/And use some of it to buy their votes.’

We’re the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa.

For example, the minimum guarantee for the poor is good politics, because the optics are great. It’s basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been.

This time, the Congress has released no details, and keeping it vague makes sense because I find it hard to see how it can make economic sense. Depending on how they define ‘poor’, how much income they offer and what the cost is, the plan will either be ineffective or unworkable.

The Modi government’s interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery.

Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don’t sound sexy as election promises. Handouts do.

A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can’t blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis.

Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive.

Two, Indian politics revolves around identity and patronage. The spoils of power are limited – that is indeed a zero-sum game – so you’re likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole.

Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer.

Four, Politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable.

Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix?

Benjamin Franklin once said, “Democracy is two wolves and a lamb voting on what to have for lunch.” This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





Farmers, Technology and Freedom of Choice: A Tale of Two Satyagrahas

This is the 23rd installment of The Rationalist, my column for the Times of India.

I had a strange dream last night. I dreamt that the government had passed a law that made using laptops illegal. I would have to write this column by hand. I would also have to leave my home in Mumbai to deliver it in person to my editor in Delhi. I woke up trembling and angry – and realised how Indian farmers feel every single day of their lives.

My column today is a tale of two satyagrahas. Both involve farmers, technology and the freedom of choice. One of them began this month – but first, let us go back to the turn of the millennium.

As the 1990s came to an end, cotton farmers across India were in distress. Pests known as bollworms were ravaging crops across the country. Farmers had to use increasing amounts of pesticide to keep them at bay. The costs of the pesticide and the amount of labour involved made it unviable – and often, the crops would fail anyway.

Then, technology came to the rescue. The farmers heard of Bt Cotton, a genetically modified type of cotton that kept these pests away, and was being used around the world. But they were illegal in India, even though no bad effects had ever been recorded. Well, who cares about ‘illegal’ when it is a matter of life and death?

Farmers in Gujarat got hold of Bt Cotton seeds from the black market and planted them. You’ll never guess what happened next. As 2002 began, all cotton crops in Gujarat failed – except the 10,000 hectares that had Bt Cotton. The government did not care about the failed crops. They cared about the ‘illegal’ ones. They ordered all the Bt Cotton crops to be destroyed.

It was time for a satyagraha – and not just in Gujarat. The late Sharad Joshi, leader of the Shetkari Sanghatana in Maharashtra, took around 10,000 farmers to Gujarat to stand with their fellows there. They sat in the fields of Bt Cotton and basically said, ‘Over our dead bodies.’ Joshi’s point was simple: all other citizens of India have access to the latest technology from all over. They are all empowered with choice. Why should farmers be held back?

The satyagraha was successful. The ban on Bt Cotton was lifted.

There are three things I would like to point out here. One, the lifting of the ban transformed cotton farming in India. Over 90% of Indian farmers now use Bt Cotton. India has become the world’s largest producer of cotton, moving ahead of China. According to agriculture expert Ashok Gulati, India has gained US$ 67 billion in the years since from higher exports and import savings because of Bt Cotton. Most importantly, cotton farmers’ incomes have doubled.

Two, GMO crops have become standard across the world. Around 190 million hectares of GMO crops have been planted worldwide, and GMO foods are accepted in 67 countries. The humanitarian benefits have been massive: Golden Rice, a variety of rice packed with minerals and vitamins, has prevented blindness in countless new-born kids since it was introduced in the Philippines.

Three, despite the fear-mongering of some NGOs, whose existence depends on alarmism, the science behind GMO is settled. No harmful side effects have been noted in all these years, and millions of lives impacted positively. A couple of years ago, over 100 Nobel Laureates signed a petition asserting that GMO foods were safe, and blasting anti-science NGOs that stood in the way of progress. There is scientific consensus on this.

The science may be settled, but the politics is not. The government still bans some types of GMO seeds, such as Bt Brinjal, which was developed by an Indian company called Mahyco, and used successfully in Bangladesh. More crucially, a variety called HT Bt Cotton, which fights weeds, is also banned. Weeding takes up to 15% of a farmer’s time, and often makes farming unviable. Farmers across the world use this variant – 60% of global cotton crops are HT Bt. Indian farmers are so desperate for it that they choose to break the law and buy expensive seeds from the black market – but the government is cracking down. A farmer in Haryana had his crop destroyed by the government in May.

On June 10 this year, a farmer named Lalit Bahale in the Akola District of Maharashtra kicked off a satyagraha by planting banned seeds of HT Bt Cotton and Bt Brinjal. He was soon joined by thousands of farmers. Far from our urban eyes, a heroic fight has begun. Our farmers, already victimised and oppressed by a predatory government in countless ways, are fighting for their right to take charge of their lives.

As this brave struggle unfolds, I am left with a troubling question: All those satyagrahas of the past by our great freedom fighters, what were they for, if all they got us was independence and not freedom?

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





For this Brave New World of cricket, we have IPL and England to thank

This is the 24th installment of The Rationalist, my column for the Times of India.

Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu, nothing new, just another round on the treadmill. Although I would remember her fondly, I thought me and cricket were done.

And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring up into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering into a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket.

I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken you up with such a jolt on this lovely Sunday morning, let me explain.

Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy.

When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal.

West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And some sides began to realise that they had been underestimating the value of aggression in one-day cricket as well.

The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive.

As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game.

In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve.

Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling.

This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin.

There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now.

But one day, they will all have to learn to play like this.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.






Start Your Engines: Optimizing Mixed-Signal Simulation Efficiency

During a mixed-signal simulation, the analog engine usually dominates the simulation time and resources. If you need to run only the analog engine in several windows, or if you would like to run multiple tests of the same circuit with different stimuli or test patterns, then you need to run the simulation multiple times. View this blog to learn more about the two advanced technologies that Spectre AMS Designer provides to help you improve the efficiency of your mixed-signal designs and increase simulation speed.





Start Your Engines: Create and Insert Connect Modules for Mixed-Signal Verification

Read this blog to learn how you can easily create and insert connect modules using Spectre AMS Designer with the Verilog-AMS standard language defined by Accellera.





Knowledge Booster Training Bytes - Writing Physical Verification Language Rules

Have you ever wanted to write a DRC rule deck to check for space or width constraints on polygons? Or have you wondered how the multiple lines of an LVS rule deck extract and conduct a comparison between the schematic and layout? Maybe you've been curious about the role of rule deck writers in creating high-quality designs ready for tape-out.

If any of these questions interest you, there is good news: the latest version (v23.1) of the Physical Verification Rules Writer (PVLRW) course is designed to teach you rule deck writing. This free 16-hour online course includes audio and labs designed to make your learning experience comfortable and flexible. Whether you are new to the concept or an experienced CAD/PDK engineer, the course is structured to enhance your rule deck writing skills.

The PVLRW course covers six core modules: Layer Processing, DRC Rules, Layout Extraction, ERC and LVS Rules, Schematic Netlisting, and Coloring Rules. There are also three optional appendix sections. Each module explains relevant rules with syntax, concepts, graphics, examples, and case studies.

This course is based on tool versions PEGASUS231 and Virtuoso Studio IC231.

Pegasus Input and Output

Pegasus is a cloud-ready physical verification signoff solution that enables engineers to support faster delivery of advanced-node integrated circuits (ICs) to market.

Pegasus requires input data in the form of layout geometry, schematic netlists, and rules that direct the tool operation. The rules fall into two categories: those that describe the fabrication process and those that control the job-specific operation.

Pegasus provides log and report files, netlists, databases, and error databases as output.

Overview of Pegasus Rule File

The rule decks written in Physical Verification Language (PVL) work for the Cadence PV signoff tools Pegasus and PVS (Physical Verification System).   

The PVL rules are placed in a file that gets selected in a run from the GUI or the command line, as the user directs. PVL rules may be on separate lines within the file and can also be contained in named rule blocks.

Each line of code starts with a PVL rule that uses prefix type notation. It consists of a keyword followed by options, input layer or variable names, and output layer or variable names.

A rule block starts with the keyword rule, followed by a name of your choice and an opening curly brace. The rules to be performed follow on the next lines, with a closing curly brace on its own line at the end.

  Sample Rule deck with individual lines of code and rule blocks.

DRC Rules

The first step in a typical Pegasus flow is a Design Rule Check (DRC), which verifies that layout geometries conform to the minimum width, spacing, and other fabrication process rules required by an IC foundry. Each foundry specifies its own process-dependent rules that must be met by the layout design.

There are three types of DRC rules: layer definition rules, layer derivation rules, and DRC design check rules. Layer definition rules identify the layers contained in the input layout database, and layer derivation rules derive additional layers from the original input layers, allowing the tool to test the design against specific foundry requirements using the design check rules.

A sample DRC Rule deck

A layout view displaying the DRC violations

LVS Rules

The Pegasus Layout Versus Schematic (LVS) tool compares the layout netlist with the schematic netlist to check for discrepancies.

There are two essential LVS rule sets: LVS extraction rules and comparison rules. LVS extraction rules extract drawn devices and connectivity information from the input layout geometry data and output them as a layout netlist. The LVS extraction rule set also includes the layer definition, derivation, extraction, connectivity, and netlisting rules.

LVS comparison rules are associated with comparing the extracted layout netlist to a schematic netlist.

A sample LVS Rule deck. 

TCL, Macros, and Conditional commands

Tcl is supported and used in various Pegasus functionalities, such as Pegasus rule files and Pegasus configurator. Macros are functional templates that are defined once and can be used multiple times in a rule file. Conditional Commands are used to process or skip specific commands in the rule file.

Do You Have Access to the Cadence Support Portal?

If not, follow the steps below to create your account.

  • On the Cadence Support portal, select Register Now and provide the requested information on the Registration page.
  • You will need an email address and host ID to sign up.
  • If you need help with registration, contact support@cadence.com.

To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails.

If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training.

For any questions, general feedback, or future blog topic suggestions, please leave a comment.

Related Resources

Product Manuals

Cadence Pegasus Developers Guide

Rapid Adoption Kits

Running Pegasus DRC/LVS/FILL in Batch Mode

Training Byte Videos

What Is the Run Command File?

How to Run PVS-Pegasus Jobs in GUI and Batch modes?

PVS DRC Run From - Setup Rules

What Is PVS/Pegasus Layer Viewer?

PVL Coloring Ruledecks with Docolor and Stitchcolor 

PVL Commands: dfm_property with Primary & Secondary Layer

PVS Quantus QRC Overview 

Online Courses

Pegasus Verification System

PVS (Physical Verification System)

Virtuoso Layout Design Basics

About Knowledge Booster Training Bytes

Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis. Subscribe to receive email notifications about our latest Custom IC Design blog posts.






Virtuoso Studio IC 23.1: Using Net Tracer for Design Review

This blog explores how Virtuoso Studio Net Tracer can help you perform a design review.

We’ll use the net connectivity option, which allows the user to get a clean highlighted net. You can use the Net Tracer tool to highlight the nets. You can find the Net Tracer command under the connectivity pulldown menu in the layout window.

With the Trace Manager and the ability to display different islands on the same net in different colors, you can identify and connect the unconnected islands as you wish.

The Net Tracer utility traces the nets in the physical view (layout). The trace is a highlighted net, which is a non-selectable object. The Net Tracer utility is available from Virtuoso Layout Suite XL onwards. You can use this utility based on your specific needs and preferences.

For a better understanding of the Net Tracer feature, let’s see one scenario between the circuit designer and layout engineer for a layout design review.

Circuit designer: Can we go through the routed input nets “inm” and “inp”?

Layout engineer: In the layout view below, they are highlighted using XL connectivity. Today, I will use the Net Tracer utility for the design review.

Circuit designer: I have never heard of this feature. Let's see how it works.

Layout engineer: Sure, now we turn on the Net Tracer toolbar using the below option.

You see the Net Tracer options form here:

As you can see on my screen, I have opened the layout view and engaged the Net Tracer utility.

Net Tracer allows shapes to be traced on a net in two tracing modes, namely, physical and logical, where shapes on the same net are physically or logically connected.

Physical tracing gathers all the shapes physically connected on the same net.

Logical tracing gathers all the shapes assigned to the same net. It highlights the net as in the source design (schematic). It will highlight shapes on the same net, even if they are isolated shapes that are not physically connected.

For this scenario, let us use physical tracing for input nets “inm” and “inp."

Highlighted nets are shown below:

Net “inm”                    Net “inp”                   Nets “inm” and “inp” 

      

Net Tracer has features like physical and logical tracing, preview, step-by-step mode, ease of tracing a net on a shape out of multiple underlying shapes, and so on.

Let us explore logical tracing for output nets “outm” and “outp”:

Here, you can see how to enable true color and halo before enabling logical tracing to identify the metal route. After enabling the true color halo, enable the logical trace.

Here, I open the Trace Manager to search for “outm” and “outp” and click Trace. That traces the particular nets as shown.

Net Tracer has a preview feature that hints at how the trace will appear when you create it, including the number of previewed objects. This useful feature in Virtuoso Studio highlights both completed and incomplete nets, helping the user better understand the status of the highlighted nets.

Circuit designer: Thanks for the design review; you have done good work. Net Tracer clearly shows both types of tracing, and it was easy even for a circuit designer to understand.

Layout engineer: Let me share the link to the Net Tracer RAK, where other layout engineers can explore many more amazing features of the Net Tracer.

Do You Have Access to the Cadence Support Portal?

If not, follow the steps below to create your account.

  • On the Cadence Support portal, select Register Now and provide the requested information on the Registration page.
  • You will need an email address and host ID to sign up.
  • If you need help with registration, contact support@cadence.com.

To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails.

If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training.

For any questions, general feedback, or future blog topic suggestions, please leave a comment.

Become Cadence Certified

Cadence Training Services now offers digital badges for this training course. These badges indicate proficiency in a certain technology or skill and give you a way to validate your expertise to managers and potential employers. You can highlight your expertise by adding these digital badges to your email signature or any social media platform, such as Facebook or LinkedIn. To become Cadence Certified, you can find additional information here.

Related Resources

 Videos

Invoking the MarkNet, Net Tracer command and its options

Net Tracer Features

Video: Net Tracer saving and loading saved trace, neighboring shapes of trace

Net Tracer: Physical Tracing – Step mode

Net Tracer: Physical and Logical Tracing

Video: Net Tracer show preview option, from net and display options, shape count in trace

Video: Net Tracer using a constraint group with different display mode settings and  using the Trace Manager GUI

 RAK

Introduction to Net Tracer

 Product manual

Virtuoso Layout Suite XL: Connectivity Driven Editing User Guide IC23.1

About Knowledge Booster Training Bytes

Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars that are available in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis.

Sandhya.

On behalf of the Cadence Training team







Purging duplicate vias in pcb editor

How do we purge/remove duplicated vias at the same location in PCB Editor? These vias are not stacked; they are just blind vias running through internal layers 12-14. I find an additional copy of the blind via at the same location. Not sure what caused this issue.





"net logic" question

hello:

I use the command "net logic" to change/assign the net name of pins, but the command can be used on only one pin at a time.

Is there a way to change/assign the same net name on 100 pins all at once?

I have a daisy-chain design, so I need to assign one net name to hundreds of pins.

========

Thank you, David, I was able to do it.

I am writing in this section because I cannot reply to your comment.





Display Resource Editor: Different Colors for Schematic and Layout Axis

Hi

In the environment I'm currently working in, axes are shown for schematic, symbol, and layout views. For schematics and symbols, I'd prefer a dim gray, such that the axes are just visible but not dominant. For the layout, I'd prefer a brighter color. Is there a way to realize this? So far, when I change the color of the 'axis' layer in the Display Resource Editor, the axes in all three views change together:

Thanks very much for your input!





Netlisting error when doing parametric sweep on transient simulation

Dear all,

I defined two design variables in ADE Assembler, say V1 and V2, that define the voltage 1 and voltage 2 of a "vpulse" voltage source in my schematic.

Then, I define V1 = 1.0 and V2 = 2.0, run a transient simulation, and everything is as expected. The source provides pulses between 1.0 V and 2.0 V.

Next, I set V1 = 1.0:0.5:1.5, thereby creating a parametric sweep with 1.0 V and 1.5 V for V1. I keep V2 at 2.0 V. Then the simulation fails, and all I get is "netl err" in my Output Expressions and an error message that the results directory does not exist and nothing can be plotted: This is reasonable, as the results directory is deleted on starting a new simulation, and as there is no simulation result, none of my output expressions can be plotted.

WARNING (OCN-6040): The specified directory does not exist, or the directory does not contain valid PSF results.
        Ensure that the path to the directory is correct and the directory has a logFile and PSF result files.
WARNING (ADE-1065): No simulation results are available.
ERROR (WIA-1175): Cannot plot waveform signals because no waveform data is available for plotting.
One of the possible reasons can be that 'Save' check box for these signals are not selected in the Outputs Setup pane. Ensure that these check boxes are selected before you run the simulation.

Normally, this kind of parametric sweep is not a problem; I have done this many times before. There must be something special in THIS PARTICULAR test bench or simulator setup. The trouble is, I don't get any useful error messages.

Does anyone know what might be the problem here OR where to find useful information to investigate further (log files stored somewhere)? Thank you!

Regards,

Volker

P.S. Using Corners instead does not help either. Running it through all values by hand works, though.




ic

How to use PSpice library in Virtuoso/Spectre?

I want to use a PSpice model (downloaded from TI) in Virtuoso, but it does not work. Please help me check the error message below. Thanks!

ADE > Setup > Simulation Files > PSpice Files: /TPS628502-Q1_TRANS.LIB

Parse error before token ']' in expression '[[STEADY_STATE]*0.6]'. If '[[STEADY_STATE]*0.6]'  is a spice expression, quotes are required for the expression.

ERROR(SFE-46): An instance of 'TPS628502-Q1_TRANS'  can have at most 8 terminals (but has 9).

*****************************************************************************
.SUBCKT TPS628502-Q1_TRANS COMP_FSET EN FB GND PG SW SYNC_MODE VIN
+ PARAMS: STEADY_STATE=0
V_U9_V45 U9_N16725824 0 5
E_U9_ABM22 U9_N16725392 0 VALUE { V(FREQ)*1e-12 }
X_U9_U161 U9_N16849713 U9_N16846056 one_shot PARAMS: T=20




ic

How to Set Up a Config View to Easily Switch Between Schematic and Calibre of DUT for Multiple Testbenches?

Hello everyone,

I hope you're all doing well. I’ve set up two testbenches (TB1 and TB2) for my Design Under Test (DUT) using Cadence IC6.1.8-64b.500.21 tools, as shown in the attached figure. The DUT has multiple views available: schematic, Calibre, Maestro, and Symbol, and each testbench uses the same DUT in different scenarios. Currently, I have to manually switch between these views, but I would like to streamline this process.

My goal is to use a single config view that allows me to switch between the schematic and the extracted (Calibre) views. Ideally, I would like to have a configuration file where making changes once would update both testbenches (TB1 and TB2) automatically. In other words, when I modify one config, both testbenches should reflect this update for a single simulation run.

I would really appreciate it if you could guide me on the following:

  1. How to create a config view for my DUT that can be used to easily switch between the schematic and extracted views, impacting both TB1 and TB2.
  2. Where to specify view priorities or other settings to control which view is used during simulation.
  3. Best practices for using a config file in this scenario, so that it ensures consistency across multiple testbenches.

Please refer to the attached figure to get a better understanding of the setup I’m using, where both TB1 and TB2 include the same DUT with multiple available views.

Thank you so much for your time and assistance!




ic

Incorrect output of multiplication in JasperGold

I want to use JasperGold to formally verify the functionality of my custom multiplier. I am computing the expected result using A*B to check against the output of my multiplier. Here, A and B are two logic signed operands. However, JasperGold is performing the operation A*B incorrectly.

I have reproduced this issue using the attached example. JasperGold compiles and elaborates the module and subsequently runs a formal proof. The tool raises a counterexample to the assertion; a screenshot is attached below:

I simulated the same example using xrun, and it gave the correct product output in the SimVision waveform. Please help me resolve this issue. I am using the 2023.03 version of Jasper Apps.
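In case the root cause is the usual SystemVerilog signedness pitfall rather than a tool issue, here is a minimal sketch of a signedness-safe reference model (the 8-bit width, module name, and port names are hypothetical placeholders, not the actual design). When a signed product is compared against an unsigned DUT output, the whole comparison is treated as unsigned and the product is zero-extended, corrupting negative results; widening the operands into signed intermediates first avoids this:

module mult_ref #(parameter int N = 8) (
  input  logic                clk,
  input  logic signed [N-1:0] a, b,
  input  logic [2*N-1:0]      prod  // DUT output, often declared unsigned
);
  logic signed [2*N-1:0] a_ext, b_ext, expected;

  assign a_ext    = a;              // implicit sign extension to 2N bits
  assign b_ext    = b;
  assign expected = a_ext * b_ext;  // full-width signed product

  // Reinterpret the DUT output as signed before comparing
  a_prod: assert property (@(posedge clk) $signed(prod) == expected);
endmodule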

Thanks and regards

Anubhav Agarwal




ic

Explain/correct my understanding of average vs. covered in IMC metrics

I'm working on code coverage. Doing a metrics analysis, by default we see the overall average grade and overall covered grade. But when I do a block analysis on an instance, I see the overall covered grade, code covered grade, block covered grade, statement covered grade, expression covered grade, and toggle covered grade.

As I didn't know the difference, I started to read the IMC user guide and learned there are three grades we come across while doing code coverage: local, covered, and average.

From my understanding

Local - child instance metrics don't reach the parent level. For example, we have an instance Q and its sub-instances Q.a and Q.b. The block local grade of Q can be 100% even when the block local grades of Q.a and Q.b aren't at 100%.

In the attached image there is a formula.

The key difference between average and covered is the weights.

Average: Mathematically, take the above scenario where Q.a and Q.b have 10 blocks each. Q.a has covered 8 blocks and Q.b has covered 2 blocks. If we take the normal average, it should be total covered / total number = (8+2)/(10+10), yielding 50%. But when we add weights, saying Q.a is 70% and Q.b is 30%, the new number would be (8*0.7 + 2*0.3) / (10*0.7 + 10*0.3), resulting in 62%. Because of the weights, we see a 12% bump.
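Written out generally (my own formulation of the arithmetic above, not a formula quoted from the IMC guide):

\[ \text{average grade} = \frac{\sum_i w_i \, c_i}{\sum_i w_i \, n_i} \]

where w_i is the weight of instance i, c_i its covered blocks, and n_i its total blocks; with w = (0.7, 0.3), c = (8, 2), and n = (10, 10), this gives 6.2/10 = 62%, matching the numbers above.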

Covered: weights play no role here.

Among these three metrics, I've changed my default view to the one in the image to get a more realistic picture when I analyze metrics. Do you agree with this approach?




ic

How do I create a basic connectivity CSV?

First-time user of JasperGold, doing chip-level verification. I want to prove that an arbiter and a buffer are connected, and I want to use the Connectivity app to do that.

I see from the user guide that I should provide a connectivity map, but I have no idea how to construct one.
The training videos said to use this command: check_conn -generate_template jasper_template.xlsm -xlsm
But that did nothing, or at least it did not produce a file that I could find.


[<embedded>] % check_conn -generate_template jasper_template.xlsm -xlsm
ERROR (ESW104): Invalid command formation.
Problem occurs with "-generate_template -xlsm".

And even if it had, I don't know what's in the file, or whether it would contain the two IPs I'm trying to check.

I'm hoping someone can give me a bit of a boost here with some knowledge.




ic

"How to disable toggle coverage of unused logic"

I'm currently working on coverage analysis. In my design, certain register bits remain unused, which lowers toggle coverage. Specifically, I'd like to know how to disable coverage for specific unused register bits within a 32-bit register. For instance, I want to deactivate coverage for bit 17 and bit 20 of a 32-bit register to optimize toggle coverage. Could you please provide guidance on how to accomplish this?




ic

Is it possible to automatically exclude registers or wires that are not used from toggle coverage?

Hello,

I have a question about toggle coverage.

In my case, there are many unused registers or wires that are affecting the toggle coverage score negatively.

Is it possible to automatically exclude registers or wires that are not used from toggle coverage?

My RTL code is as follows. Is it possible to automatically disable tb.top1.b and tb.top1.c without using an exclude file?

module top1;
  reg a;
  reg b;
  reg [31:0] c;

  initial
  begin
    #1 a=1'b0;
    #1 a=1'b1;
    #1 a=1'b0;
  end
endmodule

module tb;
  top1 top1();
endmodule




ic

SimVision Array Slicing

> reg [63:0] rMem [0:255]

This signal can be viewed as rMem[0:255] in SimVision.

Is it possible to generate new rMem1 and rMem2 signals by splitting it into 32-bit-wide halves through right-click > Create on rMem?
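If the GUI does not support this directly, one workaround is to create the slices in the testbench HDL so that SimVision can display them as ordinary signals. A minimal sketch, assuming the memory is visible at this scope (rMem1/rMem2 are hypothetical helper names):

  // 32-bit slices of rMem, probeable as ordinary signals in SimVision
  wire [31:0] rMem1 [0:255];  // lower half of each 64-bit word
  wire [31:0] rMem2 [0:255];  // upper half of each 64-bit word

  genvar i;
  for (i = 0; i <= 255; i = i + 1) begin : g_slice
    assign rMem1[i] = rMem[i][31:0];
    assign rMem2[i] = rMem[i][63:32];
  end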




ic

Using vManager to identify line coverage from a specific test

I have been using the rank feature to identify tests that are redundant in our environment, but then I realized I'd also like to see exactly what coverage goes into increasing the delta_cov value for a given test. If I had a test in my rank report that contributed 0.5% of the delta_cov, how could I go about seeing exactly where that 0.5% was coming from? It seems like that might be part of the correlate function, but I couldn't manage to find a way to see what specific coverage was being contributed by a given test.




ic

Archive of Tools Classification Analysis (Xcelium)

Hi,

Current and valid TCAs for Functional Safety are readily available at the FuSa "one-stop shop".

But I have not been able to find any archive repository providing access to the obsoleted versions.

I would also need v1.4 of the Xcelium TCA to investigate the exact changes with respect to previous projects.

Does anyone know how to find it?

Best regards,  Lars




ic

IntelliGen Statistics Metrics Collection Utility

As noted in white papers, posts on the Team Specman Blog, and the Specman documentation, IntelliGen is a completely new stimulus generator compared to the original "Pgen" and, as a result, some effort is needed to migrate an existing verification environment to fully leverage the power of IntelliGen.  One of the main steps in migrating code is running the linters on your code and addressing the issues highlighted.

Included below is a simple utility you can add to your environment to collect some valuable statistics about your code base, letting you better gauge the amount of work that might be required to migrate from Pgen to IntelliGen.  The ICFS (Inconsistently Connected Field Set) statistics reported are of particular benefit: the utility not only identifies the approximate number of ICFSs in the environment, it also breaks the total down by generation context (structs/units and gen-on-the-fly statements), allowing you to better focus your migration efforts.

IMPORTANT: Sometimes a given environment can trigger a large number of IntelliGen linting messages right off the bat.  Don't let this freak you out!  It does not mean that migration will be a long effort; quite often, some slight changes to an environment remove a large number of identified issues.  I recently encountered a situation where a simple change to three locations in the environment removed 500+ ICFSs!

The methods included in the utility can be used to report information on the following:
- Number of e modules
- Number of lines in the environment (including blanks and comments)
- Number and type of IntelliGen Guidelines linting messages
- Number of Inconsistently Connected Field Sets (ICFSs)
- Number of ICFS contexts and how many ICFSs per context
- Number of soft..select overlays found in the environment
- Number of Laces identified in the environment


To use the code below, simply load it before/after loading your e-code, and then you can execute any of the following methods:

- sys.print_file_stats()        : prints # of lines and files
- sys.print_constraint_stats()  : prints # of constraints in the environment
- sys.print_guideline_stats()   : prints # of each type of linting message
- sys.print_icfs_stats()        : prints # of ICFSs, contexts, and #ICFS/context
- sys.print_soft_select_stats() : prints # of soft select overlay issues
- sys.print_lace_stats()        : prints # of laces identified in the environment (*only works for SPMNv6.2s4 and later*)

Each of the above methods produces its own log file (stored in the current working directory) containing relevant information for more detailed analysis.
- file_stats_log.elog : Output of "show modules" command
- constraint_log.elog : Output of the "show constraint" command
- guidelines_log.elog : Output of "gen lint -g" (with notification set to MAX_INT in order to get all warnings)
- icfs_log.elog       : Output of "gen lint -i" command
- soft_select_log.elog: Output of the "gen lint -s" command
- lace_log.elog       : Output of the "show lace" command


Happy generating!

Corey Goss




ic

How to transfer a trained artificial neural network to Verilog-A

Hi all, I've trained a device model using an artificial neural network approach, and it fits well.

May I know how to transfer the trained model to Verilog-A so that we can use it for circuit simulation?

I've searched for some lectures that provide the Verilog-A code in the appendix, but I'm a freshman in the field of Verilog-A.

Could anyone explain each statement? For example:

real hlayer-w[0:(NI*NNHL)-1




ic

X-FAB's Innovative Communication and Automotive Designs: Powered by Cadence EMX Planar 3D Solver

Using the EMX solver, X-FAB design engineers can efficiently develop next-generation RF technology for the latest communication standards (including sub-6GHz 5G, mmWave, UWB, etc.), which are enabling technologies for communications and electric vehicle (EV) wireless applications. (read more)




ic

Overcoming Thermal Challenges in Modern Electronic Design

Melika Roshandell talks with David Maliniak in a Microwaves & RF QuickChat video about the thermal challenges in today’s complex electronic designs and how the Celsius solver uniquely addresses them. (read more)





ic

Quickchat Video Interview: Introducing Cadence Optimality and OnCloud for Systems Analysis and Signoff

Microwaves & RF's David Maliniak interviews Sherry Hess of Cadence about the recently announced Optimality and OnCloud products. (read more)




ic

BoardSurfers: Managing Design Constraints Efficiently Using Constraint Sets

A constraint is a user-defined property, or a rule, applied to a physical object, such as a net, pin, or via in a design. There are a number of constraints that can be applied to an object based on its type and behavior. For example, you can define t...(read more)




ic

Modern Thermal Analysis Overcomes Complex Electronic Design Issues

By combining finite element analysis with computational fluid dynamics, designers can perform complete thermal system analysis using a single tool. (read more)




ic

BoardSurfers: Training Insights: What’s New in the Allegro PCB Editor Basic Techniques Course

The Allegro PCB Editor Basic Techniques course provides all the essential training required to start working with Allegro® PCB Editor. The course covers all the design tasks, including padstack and symbol creation, logic import, constraints setup... (read more)




ic

Harmonic Balance (HB) Large-Signal S-Parameter (LSSP) simulation

Dear all,

Hi!

I'm trying to do a Harmonic Balance (HB) Large-Signal S-Parameter (LSSP) simulation to figure out the input impedance of a nonlinear circuit.

What I want to obtain from this simulation is the large-signal S11 only (not S12, S21, and S22).

So, I have simulated with only a single port (PORT0) at the input, but the LSSP simulation terminates, and the output log shows the following text:

" Analysis `hb' was terminated prematurely due to an error "

The LSSP simulation does not proceed without a second port.

Should I use a floating second port (which is not necessary for my circuit) to make the LSSP simulation succeed?

Does the LSSP simulation really need two ports?

The figure below shows my HB LSSP simulation setup.

Additionally, a Periodic S-Parameter (PSP) simulation using HB succeeds with only a single port.

What is the difference between PSP and LSSP simulations?




ic

Error when trying to generate SUL license (-8)

Hi, newbie here.

We are using AWR Design Environment at our university, so I have to install it (OS: Arch Linux).
I installed it in a Windows 10 VM without problems.

When I try to start it, it prompts "Failed to connect to license server"; I guess that's the first problem.

After that, when trying to generate my SUL license, it prompts Internal Error -8 (see image).

I can't find anything on Error -8, and overall the available information on the licensing topic is quite sparse.

If someone has a solution for this, I would gladly hear about it. :)




ic

nport device S-parameter data file relative path

Hi,

In our design team, we're looking for a strategy to make all cell views self-contained. We are struggling to do so when nport devices are involved.

The nport device requires a full path to its S-parameter data file, whereas what we need is a path relative to the cell in which the nport is used.

I have browsed through the forums and Cadence support pages but could not find a solution.

1) There is a proposal from Andrew to add the file directory in the ADE "Simulation Files" option: https://community.cadence.com/cadence_technology_forums/f/rf-design/27167/s-parameter-datafile-path-in-nport. This, however, is not suitable, because the cell is not self-contained.

2) Newer Cadence versions offer the DataSource "cellView" option for nport:

This, however, is not suitable for us for two reasons:

i- Somehow we don't get this option in the nport cell (perhaps some custom modification from our PDK team)

ii- Even if we had this option, it requires selecting the library, which again makes it unsuitable: we often copy design libraries for derivative products using the "Hierarchical Copy" feature, and when the library is copied, the nport will still point to the old library. Thus, it is still not self-contained.

In principle, it should not be technically difficult to point to a text file relative to the cell directory (for example, we could make a folder named "sparFiles" under the cell and place all S-parameter files there); however, it does not seem to be possible.

Could you perhaps recommend a workaround to achieve our goal of making cells that contain nport devices self-contained, so that when we copy a cell we do not have to update all the nport file paths?

Thanks in advance.

My Cadence Version: IC23.1-64b.ISR4.51

My Spectre version: 23.1.0.362.isr5




ic

HB: duplicated frequencies in 3-tone simulation

I get multiple results at the same frequency in a 3-tone simulation.

I am trying to determine the IP3 of a mixer. I have 3 large-signal tones: 0.75 GHz, 1.25 GHz, and 1.26 GHz.

At the IM3 frequency of 490 MHz, I observe 4 results; see also the screenshot of the table output. The frequencies are exactly the same (even when I subtract 490 MHz using xval()).

Which of the values do I have to use to determine the correct IP3?

Is there an option to merge these results?




ic

Figures missing in the RF Design Blogs article of "Measuring Fmax for MOS Transistors"

Hi, I noticed that some figures from old posts in the Cadence blogs are missing.

I think this problem happened before, and Andrew Beckett asked the original author to fix the issue:

 Figures missing in the RF Design Blogs article of "Measuring Fmax for MOS Transistors" 

Some of these posts are quite valuable, and it would be nice to have access to the figures, which are a very important part of them.

Thanks

Leandro




ic

Virtuoso ICADVM20.1 ISR26 and IC6.1.8 ISR26 Now Available

The ICADVM20.1 ISR26 and IC6.1.8 ISR26 production releases are now available for download. (read more)




ic

Start Your Engines: An Innovative and Efficient Approach to Debug Interface Elements with SimVision MS

This blog introduces you to an efficient way to debug interface elements or connect modules in a mixed-signal simulation. (read more)