cr

LA County's Whole Person Care analytics program offers crucial flexibility  

When Los Angeles County invested in Whole Person Care (WPC), it could not have known just how important the system’s flexibility would be. Anyone who has worked with health care delivery, policy, oversight, and management knows things change quickly. As data becomes a priority, expectations of the use [...]

The post LA County's Whole Person Care analytics program offers crucial flexibility   appeared first on Government Data Connection.




cr

CloudWalker 55 Inches 4K Ultra HD Smart LED Screen (55SUA7) Review

Read the in-depth review of the CloudWalker 55 Inches 4K Ultra HD Smart LED Screen (55SUA7) TV. Get detailed information about its configuration, design, and performance quality, along with pros and cons, the Digit rating, and a verdict based on user opinions and feedback.




cr

Creative Outlier Air Review

Read the in-depth review of the Creative Outlier Air (Audio Video). Get detailed information about its configuration, design, and performance quality, along with pros and cons, the Digit rating, and a verdict based on user opinions and feedback.




cr

DmC: Devil May Cry Review

Read the in-depth review of DmC: Devil May Cry (Gaming). Get detailed information about its configuration, design, and performance quality, along with pros and cons, the Digit rating, and a verdict based on user opinions and feedback.




cr

Microsoft Xbox One S 500GB Console White Review

Read the in-depth review of the Microsoft Xbox One S 500GB Console White (Gaming). Get detailed information about its configuration, design, and performance quality, along with pros and cons, the Digit rating, and a verdict based on user opinions and feedback.




cr

Micromax In 1 Review

Read the in-depth review of the Micromax In 1 (Mobile Phones). Get detailed information about its configuration, design, and performance quality, along with pros and cons, the Digit rating, and a verdict based on user opinions and feedback.




cr

Micromax Canvas Nitro A310 Review

Read the in-depth review of the Micromax Canvas Nitro A310 (Mobile Phones). Get detailed information about its configuration, design, and performance quality, along with pros and cons, the Digit rating, and a verdict based on user opinions and feedback.




cr

Micromax Canvas Nitro 4G Review

Read the in-depth review of the Micromax Canvas Nitro 4G (Mobile Phones). Get detailed information about its configuration, design, and performance quality, along with pros and cons, the Digit rating, and a verdict based on user opinions and feedback.




cr

Micromax In 2c Review

Read the in-depth review of the Micromax In 2c (Mobile Phones). Get detailed information about its configuration, design, and performance quality, along with pros and cons, the Digit rating, and a verdict based on user opinions and feedback.




cr

How to create your own Tor URL

NA




cr

How to mirror your Android or iOS smartphone screen to a Windows/Mac PC

Mirroring your smartphone’s screen onto a Windows/Mac PC is fairly simple with these apps.




cr

Microsoft has turned off File Explorer Ads and here’s how you can disable other Windows Ads

Microsoft's ads in File Explorer drew widespread criticism, and the company claimed they were “experimental” and have been “turned off”. But there are other Windows 10 or Windows 11 ads you may want to get rid of.




cr

Automate the creation of a range attribute map in SAS

In SAS, range attribute maps enable you to specify the range of values that determine the colors used for graphical elements. There are various examples that use the GTL to define a range attribute map, but fewer examples that show how to use a range attribute map with PROC SGPLOT. [...]

The post Automate the creation of a range attribute map in SAS appeared first on The DO Loop.




cr

Jailed Gangster's Wife Tries To Extort Rs 2 Crore From Hotelier, Arrested

Police on Monday arrested Manisha, wife of jailed gangster Kaushal Chaudhary, for allegedly trying to extort Rs 2 crores from a hotel owner, officials said.




cr

6 Killed, 1 Injured As Car Crashes Into Truck In Dehradun

Six people were killed and one sustained serious injuries when their car collided with a truck here early Tuesday, police said.







cr

INCREDIBLE early Black Friday deal saves you money on Samsung wearables and accessories!

Samsung has an early Black Friday deal for its accessories and wearables, so don’t miss out if you want some savings!

The post INCREDIBLE early Black Friday deal saves you money on Samsung wearables and accessories! appeared first on Phandroid.




cr

Brace For Impact! Maruti Will Increase Price Of Almost All Cars By This Date: Check Full Details

India’s largest carmaker, Maruti Suzuki India Limited (MSIL), has announced that it will hike the prices of its models from January 2023. It said the increase will vary for different models. Why? In a statement, the automaker explained its struggles and the reason behind the hikes. “The Company continues to witness increased cost pressure driven […]




cr

Shorts Break By Armoks Media Becomes #1 YouTube Creator In India For Shorts

YouTube has released its annual A YEAR ON YOUTUBE list for 2022, and there is some explosive news coming in from the house of Armoks Media. Shorts Break from Armoks Media has become the #1 YouTube Creator for Shorts videos in India, as their video “Baarish me Bheegna” has been ranked #1 on the list. […]




cr

Apple & Samsung Exported Rs 40,000 Crore Of Smartphones From India: Apple Can Beat Samsung Very Soon!

Apple is fast catching up with Samsung in India as far as smartphone exports from the country are concerned. Apple was not far behind at $2.2 billion, while Samsung’s smartphone exports stood at around $2.8 billion in value for the April-October period. Apple Scaling Up Exports In India It is […]




cr

300 Microsoft Employees Create Employee Union, First Time Ever: This Is How Microsoft Reacted

Around 300 workers at Microsoft Corp.’s ZeniMax Studios have begun the process of forming a union, which is said to be the first at the software giant in the US. ZeniMax Studios is known for popular video games including Skyrim and Fallout. Forming Union In Microsoft Corp Moreover, the quality assurance employees at […]




cr

Vesper closes $23M Series B for its sensor-based microphone: Amazon Alexa Fund among investors

Vesper, the maker of piezoelectric sensors used in microphone production and winner of a 2018 CES Innovation Award, raised a $23M Series B round. American Family Ventures led the investment with participation from Accomplice, Amazon Alexa Fund, Baidu, Bose Ventures, Hyperplane, Sands Capital, Shure, Synaptics, ZZ Capital and some undisclosed investors.

Vesper VM1000

Vesper’s innovative sensors can be used in consumer electronics like TV remote controls, smart speakers, smartphones, intelligent sensor nodes, and hearables. The company will use the funding proceeds to scale up mass production of its microphones and to support expanded research and development, hiring, and the establishment of international sales offices.

Vesper's main product is the VM1000, a low-noise, high-range, single-ended analog output piezoelectric MEMS microphone. It consists of a piezoelectric sensor and circuitry to buffer and amplify the output.

Vesper VM1010

Vesper's hot-selling product is the VM1010 with ZeroPower Listening, the first MEMS microphone that brings voice activation to battery-powered consumer devices.

The unique selling point of Vesper’s products is that they are built to operate in rugged environments with dust and moisture.

"Vesper's ZeroPower Listening capabilities coupled with its ability to withstand water, dust, oil, and particulate contaminants enables users that have never before been possible," said Katelyn Johnson, principal of American Family Ventures. "We are excited about Vesper's quest to transform our connected world, including IoT devices."

Other recent funding news includes the $24M raised by sensor-based baby sock maker Owlet, IFTTT banking $24M from Salesforce to scale its IoT enterprise offering, and Intel selling its Wind River software business to TPG.




cr

Microsoft buys conversational AI company Semantic Machines for an undisclosed sum

Microsoft announced it has acquired Semantic Machines, a conversational AI startup founded in 2014 that provides chatbots and AI chat apps and has raised $20.9 million in funding from investors. The acquisition will help Microsoft catch up with Amazon Alexa, though the latter is more focused on enabling consumer applications of conversational AI.

Microsoft will use the Semantic Machines acquisition to establish a conversational AI center of excellence in Berkeley to help it innovate in natural language interfaces.

Microsoft has been stepping up its products in conversational AI. It launched the digital assistant Cortana in 2015, as well as social chatbots like XiaoIce. The latest acquisition can help Microsoft beef up its ‘enterprise AI’ offerings.

As the use of NLP (natural language processing) increases in IoT products and services, more startups are getting traction from investors and established players. In June last year, Josh.ai, a voice-controlled home automation software maker, raised $8M.

It was followed by SparkCognition, which raised a $32.5M Series B for its NLP-based threat intelligence platform.

It appears Microsoft’s acquisition of Semantic Machines was motivated by the latter’s strong AI team. The team includes technology entrepreneur Daniel Roth, who sold his previous startups Voice Signal Technologies and Shaser BioScience for $300M and $100M respectively. Other team members include Stanford AI professor Percy Liang, a developer of Google Assistant core AI technology, and former Apple chief speech scientist Larry Gillick.

“Combining Semantic Machines' technology with Microsoft's own AI advances, we aim to deliver powerful, natural and more productive user experiences that will take conversational computing to a new level,” said David Ku, chief technology officer of Microsoft AI & Research.






cr

Gqeberha Flying Squad Clamp Down On Criminals

[SAPS] - Gqeberha Flying Squad members clamped down on criminals involved in illegal abalone activities and robbery suspects in two unrelated incidents.




cr

Joburg's Water Restrictions Set to Tighten Further As Crisis Deepens

[Daily Maverick] Office of the Chief Justice reveals the Constitutional Court has been unable to sit because of unreliable water supply.




cr

Gauteng Police to Raid Spaza Shops in Food Safety Crackdown - South African News Briefs - November 11, 2024

[allAfrica]




cr

How to create multiple shapes of the same port in Innovus?

LEF allows the same port with multiple shape definitions. Does anybody know if Innovus can create multiple duplicate shapes associated with the same port? Assume they are connected outside the block with perfect timing synchronization. Thank you!




cr

Tool to create *.lib and *.db files for designs made in Innovus

Hi all, 

I have made a custom cell in Innovus that I will be instantiating into a bigger block, for which I will also be using Innovus to do the place and route. 

I understand that I can generate a *.lef file and a *.lib file using Innovus. However, I also need to create a *.db file (this file format is often used by the Design Compiler synthesis tool). 

Is there a way to create the *.db file from Innovus? Or, is there a tool that I can use to create this *.db file? 

Thank you for your time. 




cr

IR Drop Criteria

IR criteria:

Static IR (STD) ~2%

Static IR (MEM) ~1%

Dynamic IR (STD) ~10%

Dynamic IR (MEM) ~5%

Does anyone know the reason behind these criteria? >.<




cr

How to create draw region button like the one used in the Area and Density calculator

Hello,

I would like to create a button for my form that prompts the user to click on a cellview and draw a rectangle bounding box, exactly like the one used in the Area and Density Calculator. Can someone please help me with this?

Thanks!

Beto
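
For illustration only, here is a rough SKILL sketch of one common approach using the interactive enter functions. The button and callback names are hypothetical, the exact enterBox() arguments may vary by release, and this has not been checked against how the Area and Density Calculator itself is implemented.

procedure(MyDrawRegionCB()
  ; Prompt the user to drag a rectangle in the current cellview and
  ; report the resulting bounding box ((x1 y1) (x2 y2)), or nil if cancelled.
  let((bBox)
    bBox = enterBox(?prompts list("Point at the first corner of the region"
                                  "Point at the opposite corner of the region"))
    when(bBox
      printf("Selected region: %L\n" bBox)
      ; ...store bBox in your form fields here...
    )
  )
)

; Hypothetical button field for a form built with hiCreateAppForm();
; clicking it runs the callback above.
drawRegionButton = hiCreateButton(
  ?name       'drawRegionButton
  ?buttonText "Draw Region"
  ?callback   "MyDrawRegionCB()"
)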




cr

Error ASSEMBLER-1600 when running script with two different MC simulations

Hello Community,

I have encountered an issue that is a mystery to me and hope somebody could give me a clue about what is happening in Cadence and maybe even a solution?

I am running a test scripted in a SKILL file that sequentially opens two different projects with MC analyses, and in between I get an error message box and also multiple log entries in the CIW with exactly the same text.

 

Both projects run a simulation with a call like this:

historyName = maeRunSimulation(?session sessionName ?waitUntilDone t)

 

After this, the script closes the current project, opens the next project, and executes the same line with maeRunSimulation() for the second project. Then this error message appears immediately and is also logged repeatedly in the CIW window.

 

The message box looks like this:

The logs I get in CIW:

 

nil
hiCancelProgressBox(_axlNetlistCreateProgressBar)
nil
hiCancelProgressBox(_axlUILoadForm)
nil
when(dwindow('axlDataViewessWindow1) hiMapWindow(dwindow('axlDataViewessWindow1)))
t
when(dwindow('axlRunSummaryessWindow1) hiMapWindow(dwindow('axlRunSummaryessWindow1)))
t
ERROR (ASSEMBLER-1600): Cannot find an active session named fnxSession0.
You can only modify an ADE Assembler session that is active.
Perhaps the session name was misspelled or has not yet been created.
Verify the session name matches an existing ADE Assembler session.

1>
ERROR (ASSEMBLER-1600): Cannot find an active session named fnxSession0.
You can only modify an ADE Assembler session that is active.
Perhaps the session name was misspelled or has not yet been created.
Verify the session name matches an existing ADE Assembler session.

*WARNING* hiDisplayAppDBox: modal dbox 'adexlMessageDialog' is already displayed!
ERROR (ASSEMBLER-1600): Cannot find an active session named fnxSession0.
You can only modify an ADE Assembler session that is active.
Perhaps the session name was misspelled or has not yet been created.
Verify the session name matches an existing ADE Assembler session.

*WARNING* hiDisplayAppDBox: modal dbox 'adexlMessageDialog' is already displayed!
ERROR (ASSEMBLER-1600): Cannot find an active session named fnxSession0.
You can only modify an ADE Assembler session that is active.
Perhaps the session name was misspelled or has not yet been created.
Verify the session name matches an existing ADE Assembler session.




cr

Cross-probe between layout view and schematic view

Hi there

I am trying to set up cross-probing between the layout and schematic views.

So when I execute the code in the schematic using a bindkey, the code raises the layout view (hiRaiseWindow),

and then I want to descend to the same hierarchy as in the schematic (geSelectFig, leHiEditInPlace).

But it looks like the current cellview still stays at the schematic view.

I get this error message, and when I print the current cellview name at the point where it appears, it reports "schematic":

*Error* geSelectFig: argument #1 should be a database object (type template = "d") - nil

Is there any way to change the current cellview to the layout view?

I also added this code, but it didn't work:

geGetEditCellView(geGetCellViewWindow(cvId)) ;cvId is layout view

I don't want to close the schematic view; I just want to move the focus or make geSelectFig work.

Thanks in advance.
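
For illustration only, a rough SKILL sketch of one possible direction: explicitly make the layout window the current window before selecting and descending, so the ge/le commands act on the layout rather than the schematic. This is an untested sketch; figId is a hypothetical figure object taken from the layout cellview.

procedure(MyCrossProbeToLayout(cvId figId)
  ; cvId is the layout cellview (as above); figId is assumed to be a figure in cvId
  let((layWin)
    layWin = geGetCellViewWindow(cvId)      ; window currently displaying the layout
    when(layWin
      hiRaiseWindow(layWin)                 ; bring the layout window to the front
      hiSetCurrentWindow(layWin)            ; move focus so ge/le commands target the layout
      geSelectFig(figId)                    ; select in the layout, not the schematic
      leHiEditInPlace()                     ; descend into the selected instance
    )
  )
)

If geSelectFig() still reports a nil argument, make sure figId is retrieved from the layout cellview (cvId) rather than from the schematic cellview.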




cr

New Training Courses for RF/Microwave Designers Featuring Cadence AWR Software

Cadence AWR Design Environment Software Featured in Multiple Training Course Options: Live and Virtual, Starting in October.




cr

Knowledge Booster Training Bytes - The Close Connection Between Schematics and Their Layouts in Microwave Office

Microwave Office is Cadence’s tool of choice for RF and microwave designers designing everything from III-V 5G chips to RF systems in board and package technologies. These types of designs require close interaction between the schematic and its layout. A new Training Byte demonstrates how the schematic-layout connection is built into Microwave Office.




cr

Knowledge Booster Training Bytes - Working with Data Sets in Microwave Office

Data sets are a powerful and easy-to-use feature in Microwave Office. Data can be effortlessly swapped in graphs and circuit schematics.





cr

Training Webinar: Microwave Office: An Integrated Environment for RF and Microwave Design

A recording of a training webinar on Microwave Office is available. Topics cover the design environment, with special emphasis placed on electromagnetic (EM) simulation.




cr

Training Insights New Course: Planar EM Simulation in AWR Microwave Office

A new online training course for the AXIEM EM Simulator in AWR Microwave Office is available.





cr

Creating Power and Ground rings in Allegro X Package Designer Plus

Power and Ground rings are exposed rings of metal surrounding a die that supply power/ground to the die and create a low-impedance path for the current flow. These rings ensure stable power distribution and reduce noise. Allegro X Package Designer Plus has a utility called Power/Ground Ring Generator which lets you define and place one or more shapes in the form of a ring around a die.

 To run the PWR/GND Generator Wizard, go to Route > Power/Ground Ring Generator or type "pring wizard" in the APD command window to invoke the Wizard.

   

The Power/Ground Ring Wizard creates up to 12 rings (shapes) at a time. If you require more rings, you can run the wizard as many times as needed. This command displays a wizard in which you can specify:

  • The number of rings to be generated
  • The creation of the first ring as a die flag (Die flag is the boundary of the die like the power ring.)
    • If you create a die flag and the first ring is the same net as the flag, you can enter a negative distance to overlap the ring and the die flag.
  • Multiple options for placement of the rings with respect to:
    • Origination point
    • Distance from the edge of the die
    • Distance from the nearest die pin on each die side
  • The reference designator of the die with which the rings will be used
  • The distance between rings
  • The width of each ring
  • The corner types on each ring (arc, chamfer, and right-angle)
  • An assigned net name for each ring
  • A label for each ring

The rings are basic in nature. For other shape geometries or split rings, choose Shape > Polygon or Shape > Compose/Decompose Shape from the menu in the design window.

Depending on the options selected, the Power/Ground Ring Wizard UI changes, representing how the rings will be created. Verify the Wizard settings to ensure that the rings are created as intended.

  1. When the Power/Ground Ring Wizard appears, set the number of rings to 2, accept the other defaults, and click Next. You can set Create first ring as die flag to create a basic die flag.
  2. Define Ring 1 and the net associated with it.
      a) Browse and choose Vss in the Net Names dialog box.
      b) Click OK.
      c) Specify the label as VSS.
      d) Click Next.
     The first ring should appear in your design. It is associated with the proper net; in this case, VSS.
  3. For the second ring, choose the net as Vdd and specify the label as VDD.
  4. Click Next.
  5. Click Finish in the Result Verification screen to complete the process.

The completed rings appear as shown below.

Now, when you click on Power and Ground Die Pin and add wirebonds, you will see that the wirebonds are placed directly on the Power and Ground rings.




cr

Flow Control Credit Updates in PCIe 6.1 ECN

As technology continues to evolve at a rapid pace, the importance of robust and efficient interconnect standards cannot be overstated. Peripheral Component Interconnect Express (PCIe) has been a cornerstone in high-speed data transfer, enabling seamless communication between various hardware components.   

With the advent of PCIe 6.1 ECN, a significant advancement in speed and efficiency, ensuring the accuracy and reliability of its operations is paramount. One critical aspect of this is the verification of shared credit updates. For a detailed understanding of shared credit, please refer to Understanding PCIe 6.0 Shared Flow Control. 

In this blog, we will discuss why this verification is essential and what it entails.  

Introduction 

PCIe 6.1 ECN brings numerous advancements over earlier versions, such as increased bandwidth and faster data transfer speeds.   

A crucial mechanism for efficient data transmission in PCIe 6.0 is the credit-based flow control system. In this system, devices monitor credits, representing the buffer capacity available for incoming data.   

When a device transmits data, it uses credits, which are replenished or adjusted once the data is received and processed. This system ensures that the sender does not overload the receiver.  

Given the critical role of shared credit updates in maintaining the integrity and efficiency of data transfers, verification of these updates is crucial.  Proper management of credit updates is essential to ensure data integrity, as any discrepancies can lead to data loss, corruption, or system crashes.   

Verification also guarantees efficient resource allocation, preventing scenarios where some components are starved of credit while others have an excess, thus avoiding inefficiencies.  Credit inefficiencies pose issues in low power negotiations by preventing devices from entering low power states. Additionally, verification involves checking for proper error handling mechanisms, ensuring that the system can recover gracefully from errors in credit updates and maintain overall stability.   

PCIe 6.1 ECN Flow Control Optimizations Over PCIe 6.0

PCIe 6.1 ECN builds on the FLIT-based architecture introduced in PCIe 6.0, further optimizing flow control mechanisms to handle increased data rates and improved efficiency.  PCIe 6.1 ECN introduced refinements in credit management, making the allocation and advertisement of credits more precise, which helps in reducing bottlenecks and improving data flow efficiency.  Enhancements in flow control protocols ensure better management of buffer spaces and more efficient credit allocation. These enhancements are designed to handle the increased data rates and throughput demands of next-generation applications, ensuring robust and efficient data flow across PCIe devices.  

Below are some major updates: 

  1. There have been improvements in error detection and correction mechanisms in PCIe 6.1 ECN to enhance flow control reliability by ensuring that corrupted data packets are detected and handled appropriately without disrupting the flow of valid packets.  
  2. The merged credit system, which was a key feature introduced in PCIe 6.0 to simplify and optimize credit management, was further enhanced in PCIe 6.1 ECN to improve performance and efficiency.  
  3. PCIe 6.1 ECN introduced better algorithms for allocating and reclaiming merged credits to handle high data rates, and more robust error detection and correction mechanisms that reduce degradation and system instability. 
  4. PCIe 6.1 ECN provided clear guidelines on how to implement the merged credit system correctly, helping developers to implement more reliable systems. For more details, please refer to Specifications section 2.6.1 Flow Control (FC) Rules.

Summary 

In summary, PCIe 6.0 is a complex protocol with many verification challenges. You must understand many new spec changes and plan robust verification for the new features as well as the backward-compatible tests impacted by them. Cadence’s PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specifications and provides an effective and efficient way to verify the components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early-adopter customers to speed up every verification stage.   

More Information

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify PCIe 6.0, see VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express.

See the PCI-SIG website for more details on PCIe in general and the different PCI standards.  

For more information on PCIe 6.0 new features, please visit PCIe 6.0 Lane Margin and Demonstrating PCIe 6.0 Equalization Procedure.




cr

Partial Header Encryption in Integrity and Data Encryption for PCIe

Cadence PCIe/CXL VIP supports Partial Header Encryption in Integrity and Data Encryption.




cr

Versatile Use Case for DDR5 DIMM Discrete Component Memory Models

DDR5 DIMM Architectures

The DDR5 generation of Double Data Rate DRAM memories has experienced rapid adoption in recent years. In particular, the JEDEC-defined DDR5 Dual Inline Memory Module (DIMM) cards have become a mainstay for systems looking for high-density, high-bandwidth, off-chip random access memory[1]. Within a short time, the DIMM architecture evolved from an interconnected hierarchy of only SDRAM memory devices (UDIMM[2]) to complex subsystems of interconnected components (RDIMM/LRDIMM/MRDIMM[3]).

DIMM Designs and Popular Verification Use Cases

The growing complexity of the DIMMs presented a challenge for pre-silicon verification engineers, who could no longer simply validate against single DDR5 SDRAM memory models. They needed to consider how their designs would perform against DIMMs connected to each channel and operating at gigahertz clock speeds. To address this verification gap, Cadence developed DDR5 DIMM Memory Models that encapsulate all of the architectural complexities presented by real-world DIMMs, based on a robust, easy-to-use, easy-to-debug, and easy-to-reconfigure methodology. This memory-subsystem-in-a-single-instance model has seen explosive adoption among the traditional IP developer and SoC integrator customers of Cadence Memory Models.

The Cadence DIMM models act as a single unit with all of the relevant DIMM components instantiated and interconnected within, and with all AC/timing parameters among the various components fully matched out of the box, based on JEDEC specifications as well as datasheets of actual devices in the market. The typical use case for the DIMM models has been where the DUT is a DDR5 Memory Controller + PHY IP stack and the validation plan mandates compliance with the JEDEC standards and memory device vendor datasheets.

Unique Use Case for the DIMM Discrete Component Models

Although the Cadence DIMM models have enjoyed tremendous proliferation because of their cohesive implementation and unified user API, the actual DIMM models are built on top of powerful, flexible discrete component models, each of which was designed to stand on its own as a complete SystemVerilog UVM-based VIP. All of these discrete component models exist in the Cadence VIP Catalog as standalone VIPs, complete with their own protocol compliance checking capabilities and their own configuration mappings comprehensively modeling individual AC/timing parameters.

Because of this deliberate design decision, the Cadence DIMM Discrete Component Models can support a unique use-case scenario. Some users seek to develop IC designs for the various DIMM components. Such users need verification environments that can model the individual components of a DIMM and allow them the option to replace one or another component with their component design IP. They can then validate that their component design is fully compatible with the rest of the components on the DIMM and preserves the overall DIMM's compliance with JEDEC standards or memory vendor datasheets. The Cadence Memory VIP portfolio today includes various examples that demonstrate how customers can create DIMM "wrappers" by selecting from among the available DIMM discrete component models and "stitching" them together to build their own custom testbench around their specific component design IP.

A Solution for Unique Component Scenarios

The Cadence DDR5 DIMM Memory Models and DIMM Discrete Component Models can provide users with a flexible approach to validating their specific component designs within a fully populated pre-silicon environment.

Augmented Verification Capabilities

When the DIMM "wrapper" model is augmented with the Cadence DFI VIP[4], which can simulate an MC+PHY stack and offers a SystemVerilog UVM test API to the verification engineer, the overall testbench transforms into a formidable pre-silicon validation vehicle. The DFI VIP is designed as a combination of an independent DFI MC VIP and a DFI PHY VIP, connected to each other via the DFI Standard Interface and capable of operating seamlessly as a single unit. It presents a UVM sequence API to the user on the DFI MC VIP, with the memory interface of the PHY VIP connected to the DIMM "wrapper" model. With this testbench in hand, the user can then take full advantage of the UVM sequence library that comes with the DFI VIP to enable deep validation of their component design inside the DIMM "wrapper" model.

Verification Capabilities Further Enhanced

A possible further enhancement comes with the addition of an instance of the Cadence DIMM Memory Model in Passive Monitor mode at the DRAM memory interface. The DIMM Passive Monitor consumes the same configuration describing the DIMM "wrapper" in the testbench and thus can act as a reference model for the DIMM wrapper. If the DIMM Passive Monitor responds successfully to accesses from the DFI VIP but the DIMM wrapper does not, this exposes potential bugs in the DUT components or in the settings of their AC/timing parameters inside the DIMM wrapper.

Debuggability, Interface Visibility, and Protocol Compliance

One of the key benefits of the DIMM Discrete Component Models, whether in the unique use-case scenario described here or when working with the wholly unified DDR5 DIMM Memory Models, is the increased debuggability of the protocol functionality. The intentional separation of the discrete components of a DIMM gives the user full visibility of the memory traffic at every datapath landmark within a DIMM structure. For example, in modeling an LRDIMM or MRDIMM, the interface between the RCD component and the SDRAM components, the interface between the RCD component and the DB components, and the interface between the SDRAM components and the DB components are all visible and accessible to the user. The user has full access to dump the values and states of the wire interconnects at these interfaces to the waveform viewer, and thus can observe and correlate the activity against any protocol violations flagged in the trace logs by any one or more of the DIMM Discrete Component Models.

Access to these interfaces is freely available when using the DIMM Discrete Component Models. On the unified DDR5 DIMM Memory Models, a feature called Debug Ports enables the same level of visibility into the individual interconnects among the SDRAM components, RCD components, and DB components. When combined with the Waveform Debugger[5] capability that comes built in with the VIPs and Memory Models offered by Cadence, and used with the Cadence Verisium Debug[6] tool, the enhanced debuggability becomes a powerful platform.

With these debug accesses enabled, the user can pull out transaction streams, chip-state and bank-state streams, mode register streams, and error message streams right next to their RTL signals in the same Verisium Debug waveform viewer window, to debug failures all in one place. The Verisium Debug tool also parses all of the log files to probe and extract messages into a fully integrated Smart Log in a tabbed window fully hyperlinked to the waveform viewer, all at your fingertips.

A Solution for Every Scenario

Cadence's DDR5 DIMM Memory Models and DIMM Discrete Component Models, partnered with the Cadence DFI VIP, can provide users with a robust and flexible approach to validating their designs thoroughly and effectively in pre-silicon verification environments ahead of tapeout commitments. The solution offers unparalleled latitude in debuggability when the Debug Ports and Waveform Debugger functions of the Memory Models are switched on and boosted with the use of the Cadence Verisium Debug tool.

References

[1] Shyam Sharma, DDR5 DIMM Design and Verification Considerations, 13 Jan 2023.
[2] Shyam Sharma, DDR5 UDIMM Evolution to Clock Buffered DIMMs (CUDIMM), 23 Sep 2024.
[3] Kos Gitchev, DDR5 12.8Gbps MRDIMM IP: Powering the Future of AI, HPC, and Data Centers, 26 Aug 2024.
[4] Chetan Shingala and Salehabibi Shaikh, How to Verify JEDEC DRAM Memory Controller, PHY, or Memory Device?, 29 Mar 2022.
[5] Rahul Jha, Cadence Memory Models - The Gold Standard, 15 Apr 2024.
[6] Manisha Pradhan, Accelerate Design Debugging Using Verisium Debug, 11 Jul 2023.




cr

Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is an essential goal. In the field of verification, randomization is a key technique that drives robust PCIe verification: it introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more details on PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature.

The Importance of Data Integrity and Data Encryption in PCIe Devices

Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification.

Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds.

Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64 GT/s in PCIe 6.0 and 128 GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details on this.

Randomization in PCIe Verification

Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures and lead to serious problems later. For PCIe IDE verification, we therefore rely on randomization to verify behavior more efficiently.

Randomization for Data Integrity Verification

Here are some randomized verification approaches that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with normal verification methods.

1. Randomized Packet Injection: This technique randomizes data packets and injects them into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system's ability to detect and reject unauthorized or invalid packets, and to check that encryption/decryption occurs correctly across packets. During verification, we check whether the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checks. The technique helps assess the resilience of the IDE feature in PCIe in terms of: (i) data corruption: detecting whether the system can maintain data integrity; (ii) encryption failures: testing the robustness of the encryption under random data injection; (iii) packet ordering errors: ensuring reordering does not affect data delivery.

2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation: (i) detects and responds to unexpected errors; (ii) maintains secure communication under stress; (iii) follows the PCIe error recovery and reporting mechanisms (AER, Advanced Error Reporting); (iv) ensures encryption and decryption states stay synchronized across endpoints.

3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads.

Randomization for Data Encryption Verification

Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, and it ensures that vulnerabilities, such as key reuse or predictable patterns, are identified and mitigated.

1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoding assumptions. This ensures that the encryption logic behaves correctly across all possible inputs.

2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, follow the diagram below, which illustrates a detailed example key programming flow using the IDE_KM protocol.

Figure 1: IDE_KM Example

As Figure 1 shows, the IDE_KM protocol involves the start of the IDE_KM session, device capability discovery, a key request from the host, key programming to the PCIe device, and key acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if a device supports the IDE protocol, the system continues with the key programming process. A query then discovers the device's encryption capabilities, establishing whether the device supports dynamic key updates or static keys. The host then sends a request to the Key Management Entity to obtain a key suitable for the device. Once the key is obtained, the host programs it into the IDE controller on the PCIe endpoint; both the host and the device now share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established, and from this point all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM plays a crucial role in distributing keys securely and maintaining encryption and integrity for PCIe transactions; this key programming flow ensures that a secure communication channel is established between the host and the PCIe device. Hence, the randomized key approach ensures that the encryption does not repeat patterns.

3. Randomization of PHE: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. PHE should be validated using a variety of traffic, and incorporating randomization in the APIs provided for validating the PHE feature can add more robust encryption to the data. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this.

Figure 2: High-Level Flow for Partial Header Encryption

4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are supposed to be configured considering the memory address range of the IDE partner ports. The fields of the IDE address registers are split into multiple values such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks covering 32- or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, and so on. Randomized verification can help cover all of these corner cases. Please refer to Figure 3.

Figure 3: IDE Address Association Register

5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage.

Challenges of IDE Randomization and Their Solutions

Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs adds further complexity: encryption masks data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks in the IDE callback to analyze encrypted traffic without decrypting it. Randomization can also trigger unexpected failures, which are often difficult to reproduce. With seed-based randomization, a specific seed generates a repeatable random sequence, which helps in reproducing and analyzing the behavior precisely.

Conclusion

Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps uncover subtle bugs and edge cases that non-randomized testing might miss. In Cadence PCIe VIP, we support full-fledged IDE verification with rigorous randomized verification that ensures data integrity, and robust, reliable encryption mechanisms ensure secure and efficient data communication. Randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve with higher speeds and stronger security demands, our Cadence PCIe VIP keeps pace with industry demand and verifies high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0.

More Information

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express.

For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.




cr

USB crash issue in Linux 4.14.62

Hi,

First of all, I hope I have posted my query in the right place. I am expecting software support/suggestions for the issue below.

I am working on an LTE module which uses a USB interface, and the host controller is USB 2.0. The BSP is from NXP, which supports the Cadence USB 3.0 host controller along with the USB 3.0 Cadence driver. NXP had used the USB 3.0 host controller for a USB Type-C based device.

The Cadence USB 3.0 based device driver seems to be backward compatible with a USB 2.0 host controller. Since the basic LTE functionality seems to be working fine, I continued to use the same driver in Linux 4.14.62.

But I am facing a kernel warning about an unhandled interrupt, and the crash log points to the cdns3_irq function as shown below. The crash/kernel warning is very random and does not occur every time.

 

[ 1.691533] irq 36: nobody cared (try booting with the "irqpoll" option)

[ 1.698242] CPU: 0 PID: 87 Comm: kworker/0:1 Not tainted 4.9.88 #24

[ 1.704509] Hardware name: Freescale i.MX8QXP MEK (DT)

[ 1.709659] Workqueue: pm pm_runtime_work

[ 1.713675] Call trace:

[ 1.716123] [<ffff0000080897d0>] dump_backtrace+0x0/0x1b0

[ 1.721523] [<ffff000008089994>] show_stack+0x14/0x20

[ 1.726582] [<ffff0000083daff0>] dump_stack+0x94/0xb4

[ 1.731638] [<ffff00000810f064>] __report_bad_irq+0x34/0xf0

[ 1.737212] [<ffff00000810f4ec>] note_interrupt+0x2e4/0x330

[ 1.742790] [<ffff00000810c594>] handle_irq_event_percpu+0x44/0x58

[ 1.748974] [<ffff00000810c5f0>] handle_irq_event+0x48/0x78

[ 1.754553] [<ffff0000081100a8>] handle_fasteoi_irq+0xc0/0x1b0

[ 1.760390] [<ffff00000810b584>] generic_handle_irq+0x24/0x38

[ 1.766141] [<ffff00000810bbe4>] __handle_domain_irq+0x5c/0xb8

[ 1.771979] [<ffff000008081798>] gic_handle_irq+0x70/0x15c

[ 1.807416] 7a40: 00000000000002ba ffff80002645bf00 00000000fa83b2da 0000000001fe116e

[ 1.815252] 7a60: ffff000088bf7c47 ffffffffffffffff 00000000000003f8 ffff0000085c47b8

[ 1.823088] 7a80: 0000000000000010 ffff800026484600 0000000000000001 ffff8000266e9718

[ 1.830925] 7aa0: ffff00000b8b0008 ffff800026784280 ffff00000b8b000c ffff00000b8d8018

[ 1.838760] 7ac0: 0000000000000001 ffff000008b76000 0000000000000000 ffff800026497b20

[ 1.846596] 7ae0: ffff00000810bd24 ffff800026497b20 ffff000008851d18 0000000000000145

[ 1.854433] 7b00: ffff000008b8d6c0 ffff0000081102d8 ffffffffffffffff ffff00000810dda8

[ 1.862268] [<ffff000008082eec>] el1_irq+0xac/0x120

[ 1.867155] [<ffff000008851d18>] _raw_spin_unlock_irqrestore+0x18/0x48

[ 1.873684] [<ffff00000810bd24>] __irq_put_desc_unlock+0x1c/0x48

[ 1.879695] [<ffff00000810de10>] enable_irq+0x48/0x70

[ 1.884756] [<ffff0000085ba8f8>] cdns3_enter_suspend+0x1f0/0x440

[ 1.890764] [<ffff0000085baca0>] cdns3_runtime_suspend+0x48/0x88

[ 1.896776] [<ffff0000084cf398>] pm_generic_runtime_suspend+0x28/0x40

[ 1.903223] [<ffff0000084dc3e8>] genpd_runtime_suspend+0x88/0x1d8

[ 1.909320] [<ffff0000084d0e08>] __rpm_callback+0x70/0x98

[ 1.914724] [<ffff0000084d0e50>] rpm_callback+0x20/0x88

[ 1.919954] [<ffff0000084d1b2c>] rpm_suspend+0xf4/0x4c8

[ 1.925184] [<ffff0000084d20fc>] rpm_idle+0x124/0x168

[ 1.930240] [<ffff0000084d26c0>] pm_runtime_work+0xa0/0xb8

[ 1.935732] [<ffff0000080dc1dc>] process_one_work+0x1dc/0x380

[ 1.941481] [<ffff0000080dc3c8>] worker_thread+0x48/0x4d0

[ 1.946885] [<ffff0000080e2408>] kthread+0xf8/0x100
[ 1.957080] handlers:

[ 1.959350] [<ffff0000085ba668>] cdns3_irq

[ 1.963449] Disabling IRQ #36

Kindly suggest a solution to this issue.

Thanks & Regards,

Anjali




cr

For this Brave New World of cricket, we have IPL and England to thank

This is the 24th installment of The Rationalist, my column for the Times of India.

Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu, nothing new, just another round on the treadmill. Although I would remember her fondly, I thought me and cricket were done.

And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring up into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering into a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket.

I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken up you up with such a jolt on this lovely Sunday morning, let me explain.

Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy.

When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal.

West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And some sides began to realise that they had been underestimating the value of aggression in one-day cricket as well.

The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive.

As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game.

In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve.

Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling.

This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin.

There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now.

But one day, they will all have to learn to play like this.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




cr

Start Your Engines: Create and Insert Connect Modules for Mixed-Signal Verification

Read this blog to learn how you can easily create and insert connect modules using Spectre AMS Designer with the Verilog-AMS standard language defined by Accellera.




cr

Allegro 17.4 always reports new files as created in 17.2

Hello. I am using Cadence 17.4 tools. When I open a package symbol (.dra) or board file (.brd) in Allegro that was created in an older version of the tool I get a message like this one (as expected):

"The design created using release 17.2 will be updated for compatibility with the current software..."


If I create a symbol or board file from scratch in the 17.4 tool and then open it later, I get the same message (always referring to version 17.2, which is the previous version I was using here).

So far this has not caused me any problems, but I would like to understand why it is doing this in case I have something setup incorrectly.

I only have version 17.4 installed. I am not exporting to a downrev version when I save (i.e. not using File->Export->Downrev design…) and in User Preferences->Drawing I don’t have anything selected for database_compatibility_mode. What else might I check?

FYI here is the tool version information that I see after selecting Help->About Symbol:

OrCAD PCB Designer Standard 17.4-2019 S012 [10/26/2020] Windows SPB 64-bit Edition


Thanks -Jason