
How to allow a hand-made waveform to plot in Viva from Assembler?

Hi! I've made some 1-point waveform "markers" that I want to overlay on my plots to aid visualization (with the added advantage, compared to normal Viva markers, that their location updates automatically when the simulation data is refreshed).

For example, the plot below shows a spectrum along with two of these markers, which I create with the function "singlePointWave" and the Assembler output definitions, both shown below.

The problem is: as currently created and defined, Assembler is unable to plot these elements. I can send their expressions to the calculator and plotting works from there, BUT ONLY after first enabling the "Allow Any Units" option in the target Viva subwindow.

Thus, I suspect Assembler is failing to plot my markers because they "lack" other information such as axis units. How could I add whatever is missing, so that these markers plot automatically from Assembler?

Thanks in advance for any help!

Jorge.

P.S. I also don't know why, but nothing works without those "ymax()" in the output definitions--I suspect they are somehow converting the arguments to the right data type expected by singlePointWave(). Ideas how to fix that are also welcome! ^^

procedure( singlePointWave(xVal yVal)
    ; build a one-point waveform that can be overlaid on a plot as a marker
    let( (xVect yVect wave)
        xVect = drCreateVec('double list(xVal))   ; x vector with a single entry
        yVect = drCreateVec('double list(yVal))   ; y vector with a single entry
        wave = drCreateWaveform(xVect yVect)      ; waveform object built from the two vectors
        wave                                      ; return the waveform to the caller
    )
)
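
For reference, the kind of output definition I mean looks something like this (the signal name VOUT_SPECTRUM and the 1.2e9 x-location here are just placeholders, not my actual setup):

; hypothetical Assembler output expression that uses the helper above;
; ymax() collapses a waveform into a plain number, which is presumably
; why the definitions only work with it (see the P.S.)
singlePointWave( 1.2e9 ymax( VOUT_SPECTRUM ) )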





IC 23.1 installation configuration failure on RHEL 9

I am trying to install IC231 on RHEL 8 using InstallScape; however, the configuration step keeps failing.

I tried to run the configuration script manually, as suggested in one of the previous posts, and it gives me the following errors:

sh batch_configure.sh
/home/rs/cadence/installs/IC231/install/tmp/slconfig.sh: line 165: xterm: command not found
cat: ncvhdl23.03-d103lnx86_101124125631.stat: No such file or directory
rm: cannot remove 'ncvhdl23.03-d103lnx86_101124125631.stat': No such file or directory
/home/rs/cadence/installs/IC231/install/tmp/slconfig.sh: line 165: xterm: command not found
cat: ncvhdl64b23.03-d103lnx86_101124125631.stat: No such file or directory
rm: cannot remove 'ncvhdl64b23.03-d103lnx86_101124125631.stat': No such file or directory
/home/rs/cadence/installs/IC231/install/tmp/slconfig.sh: line 165: xterm: command not found
cat: oaRedist22.61-p003lnx86_101124125631.stat: No such file or directory
rm: cannot remove 'oaRedist22.61-p003lnx86_101124125631.stat': No such file or directory
/home/rs/cadence/installs/IC231/install/tmp/slconfig.sh: line 165: xterm: command not found
cat: amsEnv64b23.10-p043lnx86_101124125631.stat: No such file or directory
rm: cannot remove 'amsEnv64b23.10-p043lnx86_101124125631.stat': No such file or directory
/home/rs/cadence/installs/IC231/install/tmp/slconfig.sh: line 165: xterm: command not found
cat: ihdl64b23.10-p043lnx86_101124125631.stat: No such file or directory
rm: cannot remove 'ihdl64b23.10-p043lnx86_101124125631.stat': No such file or directory

I am not very well versed in Linux at the moment, but I am trying. Could anyone suggest something or point out what is missing?





Is there a skill command for "Assign Layout Instance terminals"?

Is there a SKILL command for "Assign Layout Instance Terminals"? This form appears when I click on Define Device Correspondence and bind the devices.

Also,

Problem statement: I have a schematic with a couple of transistor symbols, and I also have a corresponding layout view with the respective layout transistors, but they are all inside a pCell (created by me), i.e., layout transistors instantiated inside a custom pCell. So I have multiple symbols in the schematic view and a single instance (the pCell) in the layout view.
Is there a way to bind these schematic symbols with the layout transistors inside the custom pCell? Even if I have to use cph commands, I'm fine with that. I need help here.

The idea here is to establish XL connectivity between the schematic symbols and the corresponding layout transistors (inside the pCell).

Thanks,

Shankar





Coordinates (bBoxes) of all the shapes (layers) in a layout view

Hello Community,

Is there a simple way to get the coordinates of all the shapes in a layout view?

Currently, I'm flattening the layout, getting all the LPPs from the cellview, using setof to get all the shapes on a layer, and looping through them to get the coordinates.
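
In SKILL terms, what I do now is roughly this (a minimal sketch on the already flattened cellview, just printing instead of collecting results):

cv = geGetEditCellView()
foreach( lpp cv~>lpps
    ; loop over the shapes on each layer-purpose pair and print their bounding boxes
    foreach( shape setof( s lpp~>shapes s~>bBox )
        printf( "%L %L %L\n" lpp~>layerName lpp~>purpose shape~>bBox )
    )
)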

Is there a way to do this without having to flatten the layout view and merge shapes, or is there a more elegant way to do it if we do flatten?

Also, dbWriteSkill doesn't give the output the way I want it.

Thanks,

Shankar





BER and EVM calculation

Hi,

I hope you are doing well.

I have designed and simulated a PA system in Cadence using high-level blocks, which include both ideal components and some defined with Verilog-A. My goal is to calculate the Bit Error Rate (BER) and Error Vector Magnitude (EVM) in the system. I am using an LTE source from RFLib, and everything functions correctly in the transient simulation.

To calculate these parameters, I intended to use envelope simulation. However, when I attempt to run the envelope simulation, I encounter convergence errors, which prevent it from working as expected.

Given this issue, I believe I need to work with transient data instead. Could you please advise on how to approach this in Cadence without exporting the data to MATLAB?

Thank you for your assistance.





How to create draw region button like the one used in the Area and Density calculator

Hello,

I would like to create a button for my form that prompts the user to click on a cellview and draw a rectangle bounding box, exactly like the one used in the Area and Density Calculator. Can someone please help me with this?
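
For context, here is a rough sketch of the kind of flow I have in mind (the enterBox keyword arguments and the done-callback signature are assumptions that would need to be checked against the SKILL documentation):

; rough sketch only; verify enterBox() arguments and the callback signature
procedure( MyDrawRegionCB()
    ; prompt the user to draw a rectangle in the current cellview window
    enterBox(
        ?prompts  list( "Pick the first corner" "Pick the second corner" )
        ?doneProc "MyBoxDone"
    )
)

procedure( MyBoxDone( w done pts )
    ; pts is expected to hold the two corners of the drawn rectangle
    when( done
        printf( "Selected region: %L\n" pts )
    )
)

; button field to add to the form's field list
hiCreateButton(
    ?name       'myDrawRegionButton
    ?buttonText "Draw Region"
    ?callback   "MyDrawRegionCB()"
)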

Thanks!

Beto





μWaveRiders: Thermal Analysis for RF Power Applications

Thermal analysis with the Cadence Celsius Thermal Solver integrated within the AWR Microwave Office circuit simulator gives designers an understanding of device operating temperatures related to power dissipation. That temperature information can be introduced into an electrothermal model to predict the impact on RF performance.(read more)





μWaveRiders: Scoring Goals with the Latest AWR Design Environment Optimizer

AWR V22.1 software introduces the Pointer-Hybrid optimization method, which uses a combination of optimization methods, switching back and forth between them to efficiently find the lowest optimization error function cost. The optimization algorithm automatically determines when to switch to a different optimization method, making this superior to manual selection of algorithms. This method is particularly robust with regard to finding the global minimum without getting stuck in a local minimum well.(read more)





Unlock Your RF Engineering Potential with a Cadence AWR Free Academic Trial!

Are you ready to revolutionize your RF design experience? Look no further! Cadence AWR software is your gateway to mastering the intricacies of Radio Frequency (RF) circuit design, and now, you can explore its power with our exclusive Free Academic T...(read more)





Constraining some nets to route through a specific metal layer, and changing some pin/cell placements and wire directions in Cadence Innovus.

Hello All:

I am looking for help on the following, as I am new to Cadence tools [I have to use Cadence Innovus for Physical Design after Logic Synthesis using Synopsys Design Compiler, using Nangate 45 nm Open Cell Library]: while using Cadence Innovus, I would need to select a few specific nets to be routed through a specific metal layer. How can I do this on Innovus [are there any command(s)]? Also, would writing and sourcing a .tcl script [containing the command(s)] on the Innovus terminal after the Placement Stage of Physical Design be fine for this?

Secondly, is there a way in Innovus to manipulate layout components, such as changing some pin placements or wire directions (say, for example, changing a wire's direction from west to east), or moving specific closely placed cells around (without violating timing constraints, of course), using any command(s)/.tcl script? If so, would pin placement changes and constraining some closely placed cells to be moved apart be done after Floorplanning/Powerplanning (that is, prior to Placement), and the wire direction changes be done after Routing?

While making the necessary changes, could I use the usual Innovus commands to perform Physical Design of the remaining nets/wires/pins/cells, etc., or would anything need modification for the remaining components as well?

I would finally need to dump the entire design containing all of this in a .def file.

I tried looking this up but could only find material on Virtuoso and SKILL scripting, whereas I'd be using the Innovus GUI/terminal with the Nangate 45 nm Open Cell Library. I know this is a lot, but I would greatly appreciate your help. Thanks in advance.

Riya





Read two values from a text file and represent them as voltage signals on two different ports a and b

I want to read two values from a text file and drive them on two ports. I wrote the code below, but I get the error shown in the image below. The data in the text file is also shown in the screenshot.

 


`include "disciplines.vams"

module read_file (a, b);

electrical a, b;
integer in_file_0, valid, count0, int_value;

analog begin
    @(initial_step) begin
        // open the data file; $fopen returns 0 if the file cannot be opened
        in_file_0 = $fopen("/home/hh1667/ee610/my_library/read_file/data2.txt", "r");
        // read two comma-separated values; the format string must match the file
        // contents exactly (use %d for decimal values, %b for binary values)
        valid = $fscanf(in_file_0, "%b,%b", int_value, count0);
    end

    // drive the values read from the file onto the two ports
    V(a) <+ int_value;
    V(b) <+ count0;
end

endmodule





In Simvision, how do I change the waveform font size of the signal names?

Hi Cadence, 

I use SimVision 20.09-s007, but my computer screen resolution is very high. As a result, the text is too small.

In ~/.simvision/Xdefaults I changed the font size from 12 to 16, but the signal names in the waveform traces don't reflect the change.

Simvision*Font: -adobe-helvetica-medium-r-normal--16-*-*-*-*-*-*-*

Other .font changes seem to be reflected in SimVision correctly, except for the signal names.

How do I fix that? I don't mind a single variable that changes all the text fonts to 16.

Thank you!

P.S.: I found the answer in another post. I changed Preference/Waveform/Display/Signal Height.





Conformal LEC can't finish at analyze abort step. How do I proceed?

Hi Cadence & forumers, 

I am running Conformal LEC comparing a flattened netlist against RTL.

The run has been hanging for 5 days at the "analyze abort" step, which is automatically launched by the compare.

The netlist is flattened at some levels, so the hierarchical flow that I tried didn't help much. The flattened/highly optimized netlist is from the customer and is the ultimate goal. How should I proceed now?

On a side note, a test run with a hierarchical netlist from a simple DC "compile -map_effort medium" command finished after 1 day or so.

Thank you! 

// Command: vpx compare -verbose -ABORT_Print -NONEQ_Print -TIMEstamp
// Starting multithreaded comparison ...
Comparing 241112 points in parallel.

// Multithreading Overhead: 38% Gates: 8501606/6168138
// Multithreaded processing completed.
================================================================================
Compared points      PO     DFF     DLAT   BBOX   CUT   Total
--------------------------------------------------------------------------------
Equivalent           1025   241638  30     75     21    242789
--------------------------------------------------------------------------------
Abort                0      124     0      0      0     124
================================================================================
Compare results of instance/output/pin equivalences and/or sequential merge
================================================================================
Compared points      DFF    Total
--------------------------------------------------------------------------------
Equivalent           204    204
================================================================================
// Warning: 512 DFFs/DLATs have 1 disabled clock port: skipped data cone comparison
// Resolving aborts by analyze abort...





Merge several worklibs

Hi,

I found a similar question from 10 years ago, but the answer is out of date, so I am asking again.

I have compiled two different blocks in two different paths using a basic xrun -f xxxx.f, which generated two xcelium.d folders.

Now I have to compile another block based on these two. How can I link the two generated libraries while compiling the third one?

Thanks





Regarding the loading of waveform signals in the waveform window using a Tcl command

Hello,

I am trying to load some of the design signals saved in signals.svwf into the waveform window via a Tcl file. I am using the following commands, but nothing works. Can you please help?

-submit waveform loadsignals -using "Waveform 2" FB1.svwf, but it gives me the error below:

-submit waveform new -reuse -name Waveforms




al

Conformal CEC checking

Below is my Master.v

********************************************************************************************************************************************************************************************************************

///////ALU
module ALU (
    input [31:0] A,B,
    input[3:0] alu_control,
    output reg [31:0] alu_result,
    output reg zero_flag
);
    always @(*)
    begin
        // Operating based on control input
        case(alu_control)

        4'b0001: alu_result = A+B;
        4'b0010: alu_result = A-B;
        4'b0011: alu_result = A*B;
        4'b0100: alu_result = A|B;
        4'b0101: alu_result = A&B;
        4'b0110: alu_result = A^B;
        4'b0111: alu_result = ~B;
        4'b1000: alu_result = A<<B;
        4'b1001: alu_result = A>>B;
        4'b1010: begin
            if(A<B)
            alu_result = 1;
            else
            alu_result = 0;
        end
        default: alu_result = A+B;

        endcase

        // Setting zero_flag based on alu_result (1 when alu_result is nonzero)
        if (alu_result)
            zero_flag = 1'b1;
        else
            zero_flag = 1'b0;   
    end
endmodule


/////CONTROL UNIT
/*
The control unit takes the opcode, funct7, and funct3 of the instruction code to determine
and control regwrite in the IFU and alu_control in the ALU to execute the proper instruction.
*/
module CONTROL(
    input [4:0] opcode,
    output reg [3:0] alu_control,
    output reg regwrite_control,memread_control,memwrite_control
);
    always @(opcode)
    begin
       case(opcode)
        5'b00001: begin alu_control=4'b0001;  //add
        regwrite_control=1; memread_control=0; memwrite_control=0;
        end
        5'b00010: begin alu_control=4'b0010;  ///sub
        regwrite_control=1; memread_control=0; memwrite_control=0;
        end
        5'b00011: begin alu_control=4'b0011;  //mul
        regwrite_control=0; memread_control=0; memwrite_control=1;
        end
        5'b00100: begin alu_control=4'b0100;  ///OR
        regwrite_control=0; memread_control=0; memwrite_control=1;
        end
        5'b00101: begin alu_control=4'b0101;  ///AND
        regwrite_control=1; memread_control=0; memwrite_control=0;
        end
        5'b00110: begin alu_control=4'b0110;  ///XOR
        regwrite_control=0; memread_control=0; memwrite_control=1;
        end
        5'b00111: begin alu_control=4'b0111;  ///NOT
        regwrite_control=0; memread_control=0; memwrite_control=1;
        end
        5'b01000: begin alu_control=4'b1000;  //SL
        regwrite_control=1; memread_control=1; memwrite_control=0;
        end
        5'b11001: begin alu_control=4'b1001;  //SR
        regwrite_control=1; memread_control=1; memwrite_control=0;
        end
        5'b01010: begin alu_control=4'b1010;  //COMPARE
        regwrite_control=1; memread_control=1; memwrite_control=0;
        end
        //5'b11010: begin ALU_control=4'b0000;  //SW
        //regwrite_control=1; memread_control=0; memwrite_control=0;
        //end
        //5'b01010: begin ALU_control=4'bxxxx;  //LW
        //regwrite_control=0; memread_control=0; memwrite_control=1;
        //end
        default : begin alu_control = 4'b0001;
        regwrite_control=1; memread_control=0; memwrite_control=0;
        end
        endcase  
    end
endmodule



//////DATA MEMORY
module Data_Mem(
input clock, rd_mem_enable, wr_mem_enable,
input [11:0] address,
input [31:0] datawrite_to_mem,
output reg [31:0] dataread_from_mem );

reg [31:0] Data_Memory[8:0];

initial begin
    Data_Memory[0] = 32'hFFFFFFFF;
    Data_Memory[1] = 32'h00000001;
    Data_Memory[2] = 32'h00000005;
    Data_Memory[3] = 32'h00000003;
    Data_Memory[4] = 32'h00000004;
    Data_Memory[5] = 32'h00000000;
    Data_Memory[6] = 32'hFFFFFFFF;
    Data_Memory[7] = 32'h00000000;
    //Data_Memory[8] = 32'h00000008;
    //Data_Memory[9] = 32'h00000009;
    //Data_Memory[10] = 32'h0000000A;
    //Data_Memory[11] = 32'h0000000B;
    //Data_Memory[12] = 32'h0000000C;
    //Data_Memory[13] = 32'h0000000D;
    //Data_Memory[14] = 32'h0000000E;
    //Data_Memory[15] = 32'h0000000F;
    //Data_Memory[16] = 32'h00000010;
    //Data_Memory[17] = 32'h00000011;
    //Data_Memory[18] = 32'h00000012;
    //Data_Memory[19] = 32'h00000013;
    //Data_Memory[20] = 32'h00000014;
    //Data_Memory[21] = 32'h00000015;
    //Data_Memory[22] = 32'h00000016;
    //Data_Memory[23] = 32'h00000017;
    //Data_Memory[24] = 32'h00000018;
    //Data_Memory[25] = 32'h00000019;
    //Data_Memory[26] = 32'h0000001A;
    //Data_Memory[27] = 32'h0000001B;
    //Data_Memory[28] = 32'h0000001C;
    //Data_Memory[29] = 32'h0000001D;
    //Data_Memory[30] = 32'h0000001E;
    Data_Memory[31] = 32'h0000001F;
       
    end
    always@(posedge clock) begin
       if(wr_mem_enable) begin
            Data_Memory[address] <= datawrite_to_mem;
       end
       else if(rd_mem_enable) begin
               dataread_from_mem <= Data_Memory[address];
       end
       else begin
               dataread_from_mem <= 32'h00000000;
       end
    end
endmodule   



/////INST MEM
/*

*/
module INST_MEM(
    input [31:0] PC,
    input reset,
    output [31:0] Instruction_Code
);
    reg [7:0] Memory [43:0]; // Byte addressable memory with 32 locations

    
    assign Instruction_Code = {Memory[PC+3],Memory[PC+2],Memory[PC+1],Memory[PC]};

    
    
    initial begin
            // Setting 32-bit instruction: add t1, s0,s1 => 0x00940333
            Memory[3] = 8'b0000_0000;
            Memory[2] = 8'b0000_0001;
            Memory[1] = 8'b0111_1100;
            Memory[0] = 8'b0000_0001;
            // Setting 32-bit instruction: sub t2, s2, s3 => 0x413903b3
            Memory[7] = 8'b0000_0000;
            Memory[6] = 8'b0000_0110;
            Memory[5] = 8'b1000_1111;
            Memory[4] = 8'b1110_0010;
            // Setting 32-bit instruction: mul t0, s4, s5 => 0x035a02b3
            Memory[11] = 8'b0000_0000;
            Memory[10] = 8'b0000_0101;
            Memory[9] = 8'b0111_1100;
            Memory[8] = 8'b0000_0011;
            // Setting 32-bit instruction: or t3, s6, s7 => 0x017b4e33
            Memory[15] = 8'b1111_1111;
            Memory[14] = 8'b1111_0100;
            Memory[13] = 8'b1010_0000;
            Memory[12] = 8'b1010_0100;
            // Setting 32-bit instruction: and
            Memory[19] = 8'b0000_0000;
            Memory[18] = 8'b0010_1001;
            Memory[17] = 8'b0001_1101;
            Memory[16] = 8'b0010_0101;
            // Setting 32-bit instruction: xor
            Memory[23] = 8'b0000_0000;
            Memory[22] = 8'b0001_1000;
            Memory[21] = 8'b0000_1101;
            Memory[20] = 8'b0110_0110;
            // Setting 32-bit instruction: not
            Memory[27] = 8'b0000_0000;
            Memory[26] = 8'b0010_1001;
            Memory[25] = 8'b0011_1101;
            Memory[24] = 8'b1100_0111;
            // Setting 32-bit instruction: shift left
            Memory[31] = 8'b0000_0000;
            Memory[30] = 8'b0101_0111;
            Memory[29] = 8'b1100_0110;
            Memory[28] = 8'b0000_1000;
            // Setting 32-bit instruction: shift right
            Memory[35] = 8'b0000_0000;
            Memory[34] = 8'b0110_1010;
            Memory[33] = 8'b1101_0010;
            Memory[32] = 8'b0111_1001;
            /// Setting 32-bit instruction: Compare
            Memory[39] = 8'b0000_0000;
            Memory[38] = 8'b0111_1010;
            Memory[37] = 8'b1101_0010;
            Memory[36] = 8'b0110_1010;
            /// Setting 32-bit instruction:
            Memory[43] = 8'b0000_0000;
            Memory[42] = 8'b0111_0111;
            Memory[41] = 8'b1101_0010;
            Memory[40] = 8'b0111_0010;
        end
   

endmodule

//IFU
/*
The instruction fetch unit has clock and reset pins as input and 32-bit instruction code as output.
Internally the block has Instruction Memory, Program Counter(P.C) and an adder to increment counter by 4,
on every positive clock edge.
*/
module IFU(
    input clock,reset,
    output [31:0] Instruction_Code
);
reg [31:0] PC = 32'b0;  // 32-bit program counter is initialized to zero

    
    always @(posedge clock, posedge reset)
    begin
        if(reset == 1)  //If reset is one, clear the program counter
        PC <= 0;
        else
        PC <= PC+4;   // Increment program counter on positive clock edge
    end
    // Initializing the instruction memory block
    INST_MEM instr_mem(.PC(PC),.reset(reset),.Instruction_Code(Instruction_Code));

endmodule


///MUX

module Mux_2X1 (
    input mem_rd_select, // rd_mem_enable
    input wire [31:0] dataread_from_mem, regdata2,

    output reg [31:0] mux_out
);

always @(mem_rd_select or dataread_from_mem or regdata2) begin
    if (mem_rd_select == 1)
        mux_out <= dataread_from_mem ;
    else
        mux_out <= regdata2;
    end
endmodule

//DFlipFlop
module DFlipFlop(D,clock,Q);
input D; // Data input
input clock; // clock input
output reg Q; // output Q
always @(posedge clock)
begin
 Q <= D;
end
endmodule

///DATA path


module DATAPATH(
    input [4:0]Read_reg_add1,
    input [4:0]Read_reg_add2,
    input [4:0]Reg_write_add,
    input [3:0]Alu_control,
    input [11:0]Address,
    input Wr_reg_enable,Wr_mem_enable,Rd_mem_enable,
    input clock,
    input reset,
    output OUTPUT
    );

    // Declaring internal wires that carry data
    wire zero_flag;
    wire [31:0]Dataread_from_mem;
    wire [31:0]read_data1;
    wire [31:0]read_data2;
    wire [31:0]Mux_out;
    wire [31:0]Alu_result;
    //wire [31:0]datawrite_to_reg;

    // Instantiating the register file
    REG_FILE reg_file_module(.reg_read_add1(Read_reg_add1),.reg_read_add2(Read_reg_add2),.reg_write_add(Reg_write_add),.datawrite_to_reg(Alu_result),.read_data1(read_data1),.read_data2(read_data2),.wr_reg_enable(Wr_reg_enable),.clock(clock),.reset(reset));

    // Instanting ALU
    ALU alu_module(.A(read_data1), .B(Mux_out), .alu_control(Alu_control), .alu_result(Alu_result), .zero_flag(zero_flag));
    
    //Mux
    Mux_2X1 mux(.mem_rd_select(Rd_mem_enable),.dataread_from_mem(Dataread_from_mem),.regdata2(read_data2),.mux_out(Mux_out));

    //Data Memory
    Data_Mem DM(.clock(clock),.rd_mem_enable(Rd_mem_enable),.wr_mem_enable(Wr_mem_enable),.address(Address),.datawrite_to_mem(Alu_result),.dataread_from_mem(Dataread_from_mem));
    
    // Dflipflop
    DFlipFlop DF (.D(zero_flag), .Q(OUTPUT),.clock(clock));
endmodule


/*
A register file can read two registers and write in to one register.
The RISC V register file contains total of 32 registers each of size 32-bit.
Hence 5-bits are used to specify the register numbers that are to be read or written.
*/

/*
Register Read: Register file always outputs the contents of the register corresponding to read register numbers specified.
Reading a register is not dependent on any other signals.

Register Write: Register writes are controlled by a control signal RegWrite.  
Additionally the register file has a clock signal.
The write should happen if RegWrite signal is made 1 and if there is positive edge of clock.
*/
module REG_FILE(
    input [4:0] reg_read_add1,
    input [4:0] reg_read_add2,
    input [4:0] reg_write_add,
    input [31:0] datawrite_to_reg,
    output [31:0] read_data1,
    output [31:0] read_data2,
    input wr_reg_enable,
    input clock,
    input reset
);

    reg [31:0] reg_memory [31:0]; // 32 memory locations each 32 bits wide
    
    initial begin
        reg_memory[0] = 32'h00000000;
        reg_memory[1] = 32'hFFFFFFFF;
        reg_memory[2] = 32'h00000002;
        reg_memory[3] = 32'hFFFFFFFF;
        reg_memory[4] = 32'h00000004;
        reg_memory[5] = 32'h01010101;
        reg_memory[6] = 32'h00000006;
        reg_memory[7] = 32'h00000000;
        reg_memory[8] = 32'h10101010;
        reg_memory[9] = 32'h00000009;
        reg_memory[10] = 32'h0000000A;
        reg_memory[11] = 32'h0000000B;
        reg_memory[12] = 32'h0000000C;
        reg_memory[13] = 32'h0000000D;
        reg_memory[14] = 32'h0000000E;
        reg_memory[15] = 32'h0000000F;
        reg_memory[16] = 32'h00000010;
        reg_memory[17] = 32'h00000011;
        reg_memory[18] = 32'h00000012;
        reg_memory[19] = 32'h00000013;
        reg_memory[20] = 32'h00000014;
        reg_memory[21] = 32'h00000015;
        //reg_memory[22] = 32'h00000016;
        //reg_memory[23] = 32'h00000017;
        //reg_memory[24] = 32'h00000018;
        //reg_memory[25] = 32'h00000019;
        //reg_memory[26] = 32'h0000001A;
        //reg_memory[27] = 32'h0000001B;
        //reg_memory[28] = 32'h0000001C;
        //reg_memory[29] = 32'h0000001D;
        //reg_memory[30] = 32'h0000001E;
        reg_memory[31] = 32'hFFFFFFFF;
    end

    // The register file will always output the values corresponding to read register numbers
    // It is independent of any other signal
    assign read_data1 = reg_memory[reg_read_add1];
    assign read_data2 = reg_memory[reg_read_add2];

    // If clock edge is positive and regwrite is 1, we write data to specified register
    always @(posedge clock)
    begin
        if (wr_reg_enable) begin
            reg_memory[reg_write_add] = datawrite_to_reg;
        end     
    else
        reg_memory[reg_write_add] = 32'h00000000;
    end
endmodule


/////PROCESSOR


module PROCESSOR(
    input clock,
    input reset,
    output Output
);

    wire [31:0] instruction_Code;
    wire [3:0] ALu_control;
    wire WR_reg_enable;
    wire WR_mem_enable;
    wire RD_mem_enable;


    IFU IFU_module(.clock(clock), .reset(reset), .Instruction_Code(instruction_Code));
    
    CONTROL control_module(.opcode(instruction_Code[4:0]),.alu_control(ALu_control),.regwrite_control(WR_reg_enable),.memread_control(RD_mem_enable),.memwrite_control(WR_mem_enable));
    
    DATAPATH datapath_module(.Wr_mem_enable(WR_mem_enable),.Rd_mem_enable(RD_mem_enable),.Read_reg_add1(instruction_Code[9:5]),.Read_reg_add2(instruction_Code[14:10]),.Reg_write_add(instruction_Code[19:15]),.Address(instruction_Code[31:20]),.Alu_control(ALu_control),.Wr_reg_enable(WR_reg_enable), .clock(clock), .reset(reset), .OUTPUT(Output));

endmodule

**********************************************************************************************************************************************************

Below is my Synthesis.tcl file for Genus synthesis

********************

set_attribute lib_search_path "/home/sameer23185/Desktop/VDF_PROJECT/lib"
set_attribute hdl_search_path "/home/sameer23185/Desktop/VDF_PROJECT"
set_attribute library "/home/sameer23185/Desktop/VDF_PROJECT/lib/90/fast.lib"
read_hdl Master.v
elaborate
read_sdc Min_area.sdc
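# The attributes below keep unused and constant registers in the netlist so that
# the sequential key points still line up with the RTL during equivalence checking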
set_attribute hdl_preserve_unused_register true
set_attribute delete_unloaded_seqs false
set_attribute optimize_constant_0_flops false
set_attribute optimize_constant_1_flops false
set_attribute optimize_constant_latches false
set_attribute optimize_constant_feedback_seqs false
#set_attribute prune_unsued_logic false
synthesize -to_mapped -effort medium
write_hdl > report/HDL_min_Netlist.v
write_sdc > report/constraints.sdc
write_script > report/synthesis.g
report_timing > report/synthesis_timing_report.rep
report_power > report/synthesis_power_report.rep
report_gates > report/synthesis_cell_report.rep
report_area > report/synthesis_area_report.rep
gui_show

**********************************************

When I compare my golden.v with HDL_min_Netlist.v in Conformal, I get non-equivalent points for every reg_memory and Data_Memory location. I don't know what to do with these non-equivalent points; I've been stuck here for the past four days. Please help me with this. How can I resolve these non-equivalent points? Since I am new to this, I really don't know what to do.





how to tell conformal to ignore certain combination of input

Hi,

How can I tell the LEC tool to ignore a combination of values on a primary input bus in both the Golden and Revised designs?

For example, in both Golden and Revised there is:

input [3:0] data_in

I want LEC not to check the case that data_in[3:0] == 4'b1000





which tools support Linting for early stages of Digital Design flow?

I am trying to understand the linting process. I know that JasperGold is mainly the tool for this purpose, though I think JasperGold is better suited for later stages of the design. As an RTL design engineer, I want to find out whether another tool can do linting earlier in the flow. For example, do Xcelium, Genus, or Conformal support linting? I have seen some contradictory information online regarding this topic, but I can't find anything related to linting for any of these tools.

Thanks





Xcelium PowerPlayBack App and Dynamic Power Analysis

Learn how Xcelium PowerPlayback App enables the massively parallel Xcelium replay of waveforms for glitch-accurate power estimation of multi-billion gate SoC designs.(read more)






Coalesce Xcelium Apps to Maximize Performance by 10X and Catch More Bugs

Xcelium Simulator has been in the industry for years and is the leading high-performance simulation platform. As designs are getting more and more complex and verification is taking longer than ever, the need of the hour is plug-and-play apps that ar...(read more)





TSN-PTP: A Real-Time Network Clock Synchronizing Protocol

In a network containing multiple nodes, synchronization between the various nodes is not just essential but also a highly complex process. This process becomes even trickier if we synchronize the clocks between the Manager and the Peripheral. As we know, in a real-time network, some of the nodes behave like Managers while others are Peripherals. To make the communication process smooth, the local clocks of these nodes must be synchronized.

The problem with this synchronization is that the Manager's clock keeps running as well. If we simply send the value of the Manager clock to the Peripheral, synchronization doesn't happen because of the propagation delay of the messages, along with the propagation delay of the electronic circuits of the Manager and the Peripheral.

The silver lining is that these electronic circuit propagation delays are not random and remain constant, so we can add a time offset to compensate for them. To tackle this challenge, the IEEE has come up with a protocol named the "Precision Time Protocol."

 

Operation of PTP: 

To synchronize the clocks, a Sync message is sent by the Manager to the Peripheral, which then timestamps the receiving time of the same. Following this, a ‘Follow up’ message is issued by the Manager stating the timestamp at which the Sync message was sent. 

The Peripheral then finds the difference between the two values and adds this to its current time. After this, the time difference between the Manager and the Peripheral narrows down to only the propagation delay of the messages.  

To overcome this, the Peripheral issues a 'Delay Request' to the Manager, and the Manager, in turn, issues a 'Delay Response.' Both of these messages carry the timestamp of when they were issued, and the time at which they are received is also noted. Since two messages are exchanged, one from the Peripheral and one from the Manager, the round trip contains two propagation delays, so half of this value is the one-way propagation delay.

The Peripheral then adds this propagation delay to its clock, and hence the clock gets synchronized. 
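
In equation form, with t1 the Sync transmit time at the Manager, t2 the Sync receive time at the Peripheral, t3 the Delay Request transmit time at the Peripheral, and t4 its receive time at the Manager, the standard PTP relations are:

offset = ((t2 - t1) - (t4 - t3)) / 2
propagation delay = ((t2 - t1) + (t4 - t3)) / 2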

Advantages of PTP: 

  1. It provides accurate time stamping. 
  2. It is a well-known clock synchronization protocol. 
  3. It provides intensified security inside the premises. 
  4. It provides the possibility of setting coordinated actions and synchronized communication. 

There are various versions of PTP that have been developed over time, namely PTPv1, PTPv2, PTPv2_1, and the latest PTP-AS. 

Cadence Verification IP for Ethernet is available to support the newer versions of PTP, allowing simulation of the device for efficient IP, SoC, and system-level design verification. Semiconductor companies can start using it to fully verify their controller designs and achieve functional verification closure in no time.





BoardSurfers: Optimizing RF Routing and Impedance Using Allegro X PCB Editor

Achieving optimal power transfer in RF PCBs hinges on meticulously routed traces that meet specific impedance requirements. Impedance matching is essential to ensure that traces have the same impedance to prevent signal reflection and inefficient pow...(read more)





The Mechanical Side of Multiphysics System Simulation

Introduction

Multiphysics is an integral part of the concepts around digital twins. In this post, I want to discuss the mechanical aspects of multiphysics in system simulations, which are critical for 3D-IC, multi-die, and chiplet design.

The physical world in which we live is growing ever more electrified. Think of the transformation that the cell phone has brought into our lives, as has the present-day migration to electric vehicles (EVs). These products are feats not only of electronic engineering but of mechanical engineering as well, as the electronics find themselves in new and novel forms such as foldable phones and flying cars (eVTOLs). Here, engineering domains must coexist and collaborate to bring about the best end products possible.

Start with the electronics—chips, chiplets, IC packaging, PCB, and modules. But now put these into a new form factor that can be dropped or submerged in water or accelerated along a highway. What about drop testing, aerodynamics, and aeroacoustics? These largely computational fluid dynamics (CFD) and/or mechanical multiphysics phenomena must also be accounted for. And then how does the drop testing impact the electrical performance? The world of electronics and its vast array of end products is pushing us beyond pure electrical engineering to be more broadly minded and develop not only heterogeneous products but heterogeneous engineering teams as well.

Cadence's Unique Expertise

It's at this crossroads of complexity and electronic proliferation that Cadence shines. Let's take, for example, the latest push for higher-performing high-bandwidth memory (HBM) devices and AI data center expansion. These memory stacks are growing from several layers to 12, and I can't emphasize enough the importance of teamwork and integrated solutions in tackling the challenges of advanced packaging technologies; collaboration is shaping the future of semiconductor innovation and paving the way for cutting-edge developments in the industry.

These layered electronics are powered, and power creates heat. Heat needs to be understood, and thus the thermal integrity issues uncovered along the way must be addressed. However, electronic thermal issues are just the first domino in a chain of interdependencies. What about the thermal stress and warpage that can be caused by the powering of these stacked devices? How does that then lead to mechanical stress and even material fatigue as the temperature cycles from high to low and back through the use of the electronic device? This is just one example in a long list of many...

Cadence Multiphysics Analysis Offerings

The confluence of electrical, mechanical, and CFD is exactly why Cadence expanded into multiphysics at a significant rate starting in 2019 with the announcement of the Clarity 3D Solver and Celsius Thermal Solver products for electromagnetic (EM) and thermal multiphysics system simulations. Recent acquisitions of Numeca, Pointwise, and Cascade (now branded within Cadence as the Fidelity CFD Platform) as well as Future Facilities (now the Cadence Reality Digital Twin product line) are all adding CFD expertise. The recent addition of Beta CAE brings mechanical multiphysics to the suite of solutions available from Cadence. The full breadth of these multiphysics system analyses, spanning EM, thermal, signal integrity/power integrity (SI/PI), CFD, and now mechanical, creates a platform for digital twinning across a wide array of applications. You can learn more by viewing Cadence's Reality Digital Twin platform launch on the keynote stage at NVIDIA's GTC in March, as well as this Designed with Cadence video: NV5, NVIDIA, and Cadence Collaboration Optimizes Data Centers.

Conclusion

Ever more sophisticated electronic designs are in demand to fulfill the needs of tomorrow's technologies, driving a convergence of electrical and mechanical aspects of multiphysics in system simulations. To successfully produce the exciting new products of the future, both domains must be able to collaborate effectively and efficiently. Cadence is fully committed to developing and providing our customers with the software products they need to enable this electrical/mechanical evolution. From EM, to thermal, to SI/PI, CFD, and mechanical, Cadence is enabling digital twinning across a wide array of applications that are forging pathways to the future.

For more information on Cadence's multiphysics system analysis offerings, visit our webpage and download our brochure.





DesignCon Best Paper 2024: Addressing Challenges in PDN Design

Explore Impacts of Finite Interconnect Impedance on PDN Characterization

Over the past few decades, many details have been worked out in the power distribution network (PDN) in the frequency and time domains. We have simulation tools that can analyze the physical structure from DC to very high frequencies, including spatial variations of the behavior. We also have frequency- and time-domain test methods to measure the steady-state and transient behavior of the built-up systems.

All of these pieces in our current toolbox have their own assumptions, limitations, and artifacts, and they constantly raise the challenging question that designers need to answer: How to select the design process, simulation, measurement tools, and processes so that we get reasonable answers within a reasonable time frame with a reasonable budget.

Read this award-winning DesignCon 2024 paper titled “Impact of Finite Interconnect Impedance Including Spatial and Domain Comparison of PDN Characterization.” Led by Samtec’s Istvan Novak and written with a team of nine authors from Cadence, Amazon, and Samtec, the paper discusses a series of continually evolving challenges with PDN requirements for cutting-edge designs.

Read the full paper now: “Impact of Finite Interconnect Impedance Including Spatial and Domain Comparison of PDN Characterization.”





Using Voltus IC Power Integrity to Overcome 3D-IC Design Challenges

Power network design and analysis of 3D-ICs is a major challenge due to the complex nature and large size of the power network. In addition, designers must deal with the complexity of routing power through the interposer, multiple dies, through-silicon vias (TSVs), and through-dielectric vias (TDVs).
Cadence’s Integrity 3D-IC Platform and Voltus IC Power Integrity Solution provide a fully integrated solution for early planning and analysis of 3D-IC power networks, 3D-IC chip-centric power integrity signoff, and hierarchical methods that significantly improve capacity and performance of power integrity (PI) signoff while maintaining a very high level of accuracy at signoff. This blog summarizes the typical design challenges faced by today’s 3D-IC designers, as discussed in our recent webinar, “Addressing 3D-IC Power Integrity Design Challenges.” Please click here to view the full webinar.

Major Trends in Advanced Chip Design

From chips to chiplets, stacked die, 3D-ICs, and more, three major trends are impacting advanced semiconductor packaging design. The first is heterogeneous integration, which we define as a disaggregated approach to designing systems on chip (SoCs) from multiple chiplets. This approach is similar to system-in-package (SiP) design, except that instead of integrating multiple bare die – including 3D stacking – on a single substrate, multiple IPs are integrated in the form of chiplets on a single substrate.

The second major trend is around new silicon manufacturing techniques that leverage through-silicon vias (TSVs) and high-density fanout RDL. These advancements mean that silicon is becoming a more attractive material for packaging, especially when high bandwidth and form factor become key attributes in the end design. This brings new design and verification challenges to most packaging engineers, who typically work with organic and ceramic substrate materials.

Finally, on the ecosystem side, all the large semiconductor foundries now offer their own versions of advanced packaging. This brings new ways of supporting design teams with technologies like reference flows and PDKs, concepts that have typically been lacking in the packaging community. Cadence has worked with many of the leading foundries and outsourced semiconductor assembly and test facilities (OSATs) to develop multi-chip(let) packaging reference flows and package assembly design kits. The downside is that, with the time restrictions designers are under today, there isn’t enough time to simulate the details of these flows and PDKs further.

For those who must make the best electro/thermal/physical decisions to achieve the best power/performance/area/cost (PPAC), factors can include accurate die size estimations, thermal feasibility, die-to-die interconnect planning, interposer planning (silicon/organic), front-to-front and front-to-back (F2F/F2B) planning, layer stack and electromigration/ IR drop (EMIR)/TSV planning, IO bandwidth feasibility, and system-level architecture selection.

3D-IC Power Network Design and Analysis

The key to success in 3D-IC design is early power integrity planning and analysis. Cadence’s Integrity 3D-IC platform is a high-capacity 3D-IC platform that enables 3D design planning, implementation, and system analysis in a single, unified cockpit. Cadence’s Voltus IC Power Integrity Solution is a comprehensive full chip electromigration, IR drop, and power analysis solution. With its fully distributed architecture and hierarchical analysis capabilities, Voltus provides very fast analysis and has the capacity to handle the largest designs in the industry. Typically, 3D-IC PDN design and analysis is performed in four phases, as shown in Figure 1.

Phase 1 - Perform early power delivery network (PDN) exploration with each fabric’s PDN cascaded in system PI with early circuit models.

Phase 2 – Plan 3D-IC PDNs in Cadence’s Integrity 3D-IC platform, including micro bumps, TSVs, and through dielectric vias (TDVs), power grid synthesis for dies, and early rail analysis and optimization.

Phase 3 – Perform full chip-centric signoff in Voltus with detailed die, interposer, and package models, including chip die models, while keeping some dies flat.

Phase 4 – Perform full system-level signoff with Cadence’s Sigrity SystemPI using detailed extracted package models from Sigrity XtractIM, board models from Sigrity PowerSI or Clarity 3D Solver, interposer models from XtractIM or Voltus, and chip power models from Voltus.

Figure 1. 3D-IC PDN design and analysis phases

3D-IC Chip-Centric Signoff

The integration of Integrity 3D-IC and Voltus enables chip-centric early analysis and signoff. Figure 2 and Figure 3 highlight the chip centric early PI optimization and signoff flows. In early analysis, the on-chip power networks are synthesized, and the micro bumps and TSVs can be placed and optimized. In the signoff stage, all the detailed design data is used for power analysis, and detailed models are extracted and used for package, interposer, and on-die power networks.


Figure 2. Early chip-centric PI analysis and optimization flow

Figure 3. Chip-centric 3D-IC PI signoff

Hierarchical 3D-IC PI Analysis

To improve the capacity and performance of 3D-IC PI analysis, Voltus enables hierarchical analysis using chiplet models. Chiplet models can be reduced chip models in SPICE format or xPGV models, which are highly accurate proprietary models generated by Voltus. With xPGV models, hierarchical PI analysis has almost the same accuracy as flat analysis but offers a 10X or greater benefit in runtime and memory requirements.

Conclusion

This blog has highlighted the major design trends enabled by advanced 3D packaging and the design challenges arising from these advancements. The design of power delivery networks is one of these major challenges. We have discussed Cadence solutions to overcome this PI challenge. To learn more, view our recent webinar, "Addressing 3D-IC Power Integrity Design Challenges" and visit the Voltus web page.





Cadence OrCAD X and Allegro X 24.1 is Now Available

The OrCAD X and Allegro X 24.1 release is now available at Cadence Downloads. This blog post provides links to access the release and describes some major changes and new features.   OrCAD X /Allegro X 24.1 (SPB241) Here is a representative li...(read more)





Modern Thermal Analysis Overcomes Complex Design Issues

Melika Roshandell, Cadence product marketing director for the Celsius Thermal Solver, recently published an article in Designing Electronics discussing how the use of modern thermal analysis techniques can help engineers meet the challenges of today’s complex electronic designs, which require ever more functionality and performance to meet consumer demand.

Today’s modern electronic designs require ever more functionality and performance to meet consumer demand. These requirements make scaling traditional, flat, 2D-ICs very challenging. With the recent introduction of 3D-ICs into the electronic design industry, IC vendors need to optimize the performance and cost of their devices while also taking advantage of the ability to combine heterogeneous technologies and nodes into a single package. While this greatly advances IC technology, 3D-IC design brings about its own unique challenges and complexities, a major one of which is thermal management.

To overcome thermal management issues, a thermal solution that can handle the complexity of the entire design efficiently and without any simplification is necessary. However, because of the nature of 3D-ICs, the typical point tool approach that dissects the design space into subsections cannot adequately address this need. This approach also creates a longer turnaround time, which can impact critical decision-making to optimize design performance. A more effective solution is to utilize a solver that not only can import the entire package, PCB, and chiplets but also offers high performance to run the entire analysis in a timely manner.

Celsius Thermal Management Solutions

Cadence offers the Celsius Thermal Solver, a unique technology integrated with both IC and package design tools such as the Cadence Innovus Implementation System, Allegro PCB Designer, and Voltus IC Power Integrity Solution. The Celsius Thermal Solver is the first complete electrothermal co-simulation solution for the full hierarchy of electronic systems from ICs to physical enclosures. Based on a production-proven, massively parallel architecture, the Celsius Thermal Solver also provides end-to-end capabilities for both in-design and signoff methodologies and delivers up to 10X faster performance than legacy solutions without sacrificing accuracy.

By combining finite element analysis (FEA) for solid structures with computational fluid dynamics (CFD) for fluids (both liquid and gas, as well as airflow), designers can perform complete system analysis in a single tool. For PCB and IC packaging, engineering teams can combine electrical and thermal analysis and simulate the flow of both current and heat for a more accurate system-level thermal simulation than can be achieved using legacy tools. In addition, both static (steady-state) and dynamic (transient) electrical-thermal co-simulations can be performed based on the actual flow of electrical power in advanced 3D structures, providing visibility into real-world system behavior.

Designers are already co-simulating the Celsius Thermal Solver with Celsius EC Solver (formerly Future Facilities’ 6SigmaET electronics thermal simulation software), which provides state-of-the-art intelligence, automation, and accuracy. The combined workflow that ties Celsius FEA thermal analysis with Celsius EC Solver CFD results in even higher-accuracy models of electronics equipment, allowing engineers to test their designs through thermal simulations and mitigate thermal design risks.

Conclusion

As systems become more densely populated with heat-dissipating electronics, the operating temperatures of those devices impact reliability (device lifetime) and performance. Thermal analysis gives designers an understanding of device operating temperatures related to power dissipation, and that temperature information can be introduced into an electrothermal model to predict the impact on device performance. The robust capabilities in modern thermal management software enable new system analyses and design insights. This empowers electrical design teams to detect and mitigate thermal issues early in the design process—reducing electronic system development iterations and costs and shortening time to market.

To learn more about Cadence thermal analysis products, visit the Celsius Thermal Solver product page and download the Cadence Multiphysics Systems Analysis Product Portfolio.





Allegro X APD: SPB 23.1 release —Your freedom to design boldly!

Cadence is super excited to announce the SPB 23.1 release — Your freedom to design boldly!

These tools help engineers build better PCBs faster with the new 3D engine and optimized interface.  

We have been hard at work to bring you this release and believe that it will help you take control of the PCB design process with the powerful new features in Allegro X APD like: 

  • Packaging Support in 3DX Canvas 

  • 3DX Wire DRCs 

  • Aligning Components by Offset 

  • Text Wizard Enhancements 

  • Device File Reuse for Existing Components for Netlist and Logic Import 

 

Watch this space to know all about What’s New in SPB 23.1.  

 

Regards 

Team PCBTech 

Cadence Design Systems

For individuals, small businesses, or teams, START YOUR FREE TRIAL. 

 





Relative delay analysis is impacted by pbar

Does anyone know how to exclude a plating bar from a Constraint Manager analysis? I have some relative delay constraints applied to a group of differential nets. When I analyze the design, these all show an error. If I delete the plating bar from the design, they all pass. The plating bar is generated on the Substrate Geometry / Plating_Bar class. I understand that I could just delete the plating bar to verify the constraint, but the issue is that when I archive this design, I would like it to be clean, meaning it is in the final state for manufacturing AND passing all constraints according to design reviews.

Anyone have an idea? 

Thank you!





Aligning Components using Offset Mode in Allegro X APD

Starting SPB 23.1, in Allegro X PCB Editor and Allegro X Advanced Package Designer, you can align components by using offset mode. Earlier only spacing mode was available.

Follow these steps to Align Components using Offset Mode:

  1. Set Application Mode to Placement Edit.
  2. Drag the components that need to be aligned and right-click and choose Align Components.
  3. Now, in the Options tab, you will notice the Spacing section with Equal Offset. You can offset the components equally or individually by using the +/- buttons to increment or decrement the offset.





What is Allegro X Advanced Package Designer and why do I not see Allegro Package Designer Plus (APD+) in 23.1?

Starting SPB 23.1, Allegro Package Designer Plus (APD+) has been rebranded as Allegro X Advanced Package Designer (Allegro X APD).

The splash screen for Allegro X APD will appear as shown below, instead of showing APD+ 2023:

For the Windows Start menu in 23.1, it will display as Allegro X APD 2023 instead of APD+ 2023, as shown below

23.1 Start menu 

In the Product Choices window for 23.1, you will see Allegro X Advanced Package Designer in the place of Allegro Package Designer +, as shown below: 

23.1 product title





Introducing new 3DX Canvas in Allegro X Advanced Package Designer

Have you heard that starting SPB 23.1, Allegro Package Designer Plus (APD+) will be renamed as Allegro X Advanced Package Designer (Allegro X APD)? 

Allegro X APD offers multiple new features and enhancements on topics like Via Structures, Wirebond, Etchback, Text Wizards, 3D Canvas, and more. 

This post presents the new 3DX Canvas introduced in SPB 23.1. This can be invoked from Allegro X APD (from the menu item View > 3DX Canvas). 

Some of the key benefits of the new canvas: 

  • This canvas addresses the scale and complexity in large modern package designs. It provides highly efficient visual representation and implementation of packages. 
  • The new architecture enables high-performance 3D incremental updates by utilizing GPU for fast rendering. 

  • Real-time 3D incremental updates are supported, which means that the 3D view is in sync with all changes to the database. 

  • The new canvas provides 3D visualization support for packaging objects such as wire bonds, ball, die bump/pillar geometries, die stacks, etch back, and plating bar. 

  • This release also introduces the interactive measurement tool for a 3D view of packages. Once you open 3DX Canvas, press the Alt key and you can select the objects you want to measure. 
  • 3DX Canvas provides new 3D DRC Bond Wire Clearances with real 3D DRC checks. True 3D DRC in Constraint Manager has been introduced: if you open Constraint Manager, there is a new worksheet added. The following DRC checks are supported: 
    Wire to Wire 
    Wire to Finger 
    Wire to Shape 
    Wire to Cline 
    Wire to Component





How to allow DRCs to surrounding objects using the Etch Back option

Starting from SPB 23.1, a new option, Allow DRCs to surrounding metal, has been added in the Etch-Back form to allow DRCs to the surrounding objects.

The Allow DRCs to surrounding metal option lets you see and adjust objects instead of the current behavior, which sacrifices the width of the mask for the trace.

  • When this option is turned off, the tool maintains the clearance between the EB mask and other objects.
  • When this option is enabled, the tool keeps the clearance between the EB mask and the trace edge, and shows a DRC if the spacing between the EB mask and another object is out of rule.





How to access the Transmission Line Calculator in Allegro X APD

Have you ever thought of a handy utility to specify all necessary transmission line parameters to decide upon the stackup?   

Starting SPB 23.1, a handy feature, the Transmission Line Calculator, is built into Allegro X Advanced Package Designer (Allegro X APD). This feature requires either an SiP Layout license or can be accessed through the SiP Layout Bundle. 

From the Analyze dropdown menu in the 23.1 Allegro X APD toolbar, you can choose Transmission Line Calculator. 

 

You can use this calculator to help decide constraints and stackup for laminate-based PCBs or packages. You can calculate the correct stackup material and width/spacing to meet any requirements that may later be entered in a constraint. Note that this is a calculated number, not the result of a true field solver. 

The different types of calculations that the Transmission Line Calculator can provide are Microstrip, Embedded microstrip, Stripline, CPW (Coplanar), FGCPW (frequency-dependent Coplanar), Asymmetric stripline, Coupled microstrip (Differential Pair), Coupled stripline (Differential Pair), and Dual striplines. 

This feature is important for customers who rely on fabricators or spreadsheets to provide this information, or who need to quickly check a spacing/width against an impedance value. 

Let us know your comments on this new feature in 23.1 Allegro X APD. 

 




al

Allegro: Tip of the Week : Push Connectivity

At times, a condition might arise in the design where you need to push the net of selected pins to all of their physically connected objects. For example, a few pins are updated with a new net, and it is required to push the new net to all of their connected objects. This can happen when you update the die or copy routing to other components and a portion of the routing ends up with the wrong net.

To propagate the net of a pin to all its physically connected objects, Allegro X APD provides the standalone command Push Connectivity.

You can call the command through Logic > Push Connectivity.

Alternatively, you can run the push connectivity command at the command line. Once the command is active, it lets you select pins or symbols that will be used to push net connectivity to all connected objects.

Presently, dynamic shapes and filled rectangles are not considered as part of connectivity. Static shapes are supported.




al

DFA check spacing of component to BGA ball or BGA pad in APD

Hi,

There are many components on the BGA ball side of a flip-chip package.

Is there a DFA check for the spacing of a component body or pin soldermask to a BGA ball, BGA pad, or BGA soldermask in Allegro APD?

I can only find component-to-component spacing in the APD DFA checks. 




al

Allegro X APD - Tip of the week: Wondering how to set two adjacent layers as conductor layers? Then this post should help you.

By default, a dielectric must separate each pair of conductor layers in the cross-section of a design. In rare cases, this does not represent the real, manufactured substrate.

If your design requires you to have conductor layers that are not separated by a dielectric (such as for half-etch designs), there is a variable that needs to be set in Allegro X APD. You enable this behavior by setting the variable icp_allow_adjacent_conductors. This entry, and its location in the User Preferences Editor, are shown in the following image.

Objects on adjacent conductor layers do not electrically connect automatically. A via must be used to establish the inter-layer connections.
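
If you want this preference applied to every session rather than toggling it in the User Preferences Editor each time, Allegro-family user preferences can generally also be set in your env file. The snippet below is a hedged sketch of that approach; the env file location (for example, pcbenv/env under your home directory) and the use of the standard "set <variable>" syntax for Boolean preferences are assumptions about your setup, not documented instructions for this specific feature:

# Allegro env file (for example, <home>/pcbenv/env)
# Assumes the usual "set <variable>" form for Boolean user preferences
set icp_allow_adjacent_conductors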

When enabling this option, it is recommended to exercise caution because excluding dielectric layers from your cross-section can lead to inaccurate calculations, including the calculations for signal integrity and via heights. It is important that your cross-section accurately reflect the finished product to ensure the most accurate results possible. Any dielectric layers present in the manufactured part need to be in the cross-section for accurate extraction, 3D viewing, and so on.

Let us know your comments on the various designs that would require adjacent conductor layers.




al

Creating Power and Ground rings in Allegro X Package Designer Plus

Power and Ground rings are exposed rings of metal surrounding a die that supply power/ground to the die and create a low-impedance path for the current flow. These rings ensure stable power distribution and reduce noise. Allegro X Package Designer Plus has a utility called Power/Ground Ring Generator which lets you define and place one or more shapes in the form of a ring around a die.

 To run the PWR/GND Generator Wizard, go to Route > Power/Ground Ring Generator or type "pring wizard" in the APD command window to invoke the Wizard.

   

This Wizard lets you define and place one or more shapes in the form of a ring around a die. The Power/Ground Ring Wizard creates up to 12 rings (shapes) at a time. If you require more rings, you can run the Power/Ground Ring Wizard as many times as needed. This command displays a wizard in which you can specify:

  • The number of rings to be generated
  • The creation of the first ring as a die flag (a die flag is a ring at the boundary of the die, similar to the power ring)
    • If you create a die flag and the first ring is the same net as the flag, you can enter a negative distance to overlap the ring and the die flag.
  • Multiple options for placement of the rings with respect to:
    • Origination point
    • Distance from the edge of the die
    • Distance from the nearest die pin on each die side
  • The reference designator of the die with which the rings will be used
  • The distance between rings
  • The width of each ring
  • The corner types on each ring (arc, chamfer, and right-angle)
  • An assigned net name for each ring
  • A label for each ring

The rings are basic in nature. For other shape geometries or split rings, choose Shape > Polygon or Shape > Compose/Decompose Shape from the menu in the design window.

Depending on the options selected, the Power/Ground Ring Wizard UI changes, representing how the rings will be created. Verify the Wizard settings to ensure that the rings are created as intended.

  1. When the Power/Ground Ring Wizard appears, set the number of rings to 2, accept the other defaults, and click Next. You can set Create first ring as die flag to create a basic die flag.
  2. Define Ring 1 and the net associated with it.
     a) Browse and choose Vss in the Net Names dialog box.
     b) Click OK.
     c) Specify the label as VSS.
     d) Click Next.
     The first ring should appear in your design. It is associated with the proper net; in this case, VSS.
  3. For the second ring, choose the net as Vdd and specify the label as VDD.
  4. Click Next.
  5. Click Finish in the Result Verification screen to complete the process.

The completed rings appear as shown below.

Now, when you click on Power and Ground Die Pin and add wirebonds, you will see that the wirebonds are placed directly on the Power and Ground rings.




al

Allegro X APD : Tip of the Week: ‘Auto-blank other rats’ feature

When working on a complex design, it is common to have a very large number of net ratlines; quantities of 1,000 or more are possible, which can result in a cluttered view while routing. It is therefore useful to make all other ratlines invisible while routing interactively and to make them all visible again when each route action is completed.

You can easily do this by enabling the Auto-blank other rats option during routing. When enabled, all rats other than the primary ones are suppressed during the Add Connect command.




al

How to transfer etch/conductor delays from Allegro Package Designer (APD) to pin delays in Allegro PCB Editor

The packaging group has finished their design in Allegro Package Designer (APD) and I want to use the etch/conductor delay information from the mcm file in the board design in Allegro PCB Designer. Is there a method to do this?

This can be done by exporting the etch/conductor data from APD and importing it as PIN_DELAY information into Allegro PCB Editor.

If you are generating a length report for use in Allegro Pin Delay, you should consider changing the APD units to Mils and unchecking the Time Delay Report option.

In Allegro Package Designer:

  1. Select File > Export > Board Level Component.
  2. Select HDL for the Output format and select OK.
  3. Choose a padstack for use when generating the component and select OK.

This will create a file, package_pin_delay.rpt, in the component subdirectory of the current working directory. This file will contain the etch/conductor delay information that can be imported into Allegro.

In Allegro PCB Editor:

  1. Make sure that the device you want to import delays to is placed in your board design and is visible.
  2. Select File > Import > Pin delay.
  3. Browse to the component directory and select package_pin_delay.rpt. The browser defaults to look for *.csv files so you will need to change the Files of type to *.* to select the file.
  4. You may be prompted with an error message stating that the component cannot be found and you should select one. If so, select the appropriate component.
  5. Select Import.
  6. Once the import is completed, select Close.

Note: It is important that all non-trace shapes have a VOLTAGE property so they will not be processed by the 2D field solver. You should run Reports > Net Delay Report in APD prior to generating the board-level component. This will display the net name of each net as it is processed. If you missed a VOLTAGE property on a net, the net name will show in the report processing window, and you will know which net needs the property.




al

Training Insights – Palladium Emulation Course for Beginner and Advanced Users

The Cadence Palladium Emulation Platform is a hardware system that implements the design, accelerating its execution and verification. It offers the highest performance and fastest bring-up times for pre-silicon validation of billion-gate designs, using a custom processor built by Cadence.

This Palladium Introduction course is based on the Palladium 23.03 ISR4 version and covers the following modules:

  • Introduction
  • Palladium flow
  • Running a design on the Palladium system

This course starts with an “Introduction” module that explains Palladium and other verification platforms to show its place in the big picture. It also compares Palladium with Protium and simulation and discusses its usage and limitations.

The “Palladium Flow” module first presents the two stages of the flow, Compile and Run, at a high level and then covers them in detail. First, it covers the ICE compile flow and IXCOM compile flow steps in detail. Then it explains Run, which is common to both ICE and IXCOM modes.

The third module, “Running Design on the Palladium System,” covers all the items required for running your design on the Palladium system, including:

  • Software stack requirements
  • Basic concepts required to understand the flow
  • Compute machine requirements

In addition, this course contains labs for both the ICE and IXCOM flows with detailed steps to exercise the features provided by the Palladium system. The labs walk through a practical example of multiple counters, exercising their signals with the force, monitor, and deposit features, along with frequency calculation using a real-time clock. The course is available on the Cadence support page.

There is also a Digital Badge available. You will find the Badge exam opportunity when you enroll in the Online training or after you have taken the training as "live" training.

For questions and inquiries, or issues with registration, reach out to us at Cadence Training. Want to stay up to date on webinars and courses? Subscribe to Cadence Training emails. To view our complete training offerings, visit the Cadence Training website.

Related Training Bytes

Related Courses

Related Blogs




al

Jasper Formal Fundamentals 2403 Course for Starting Formal Verification

The course "Jasper Formal Fundamentals v24.03" introduces formal analysis to those who want to use formal analysis for design or verification. 

To optimally benefit from this course, you should already have sufficient knowledge of SystemVerilog Assertions to be capable of writing properties for formal verification. To help cover the essential formal background, the training also provides a module on formal analysis. 

In this course, you will learn how to code efficient SVA Properties for formal analysis, understand formal complexity and how to overcome it, and learn the basics of formal coverage.

After completing this course, you will be able to:

  • Define reusable, functionally correct SVA properties that are efficient for formal tools. These should use abstract auxiliary code to simplify descriptions, make code maintenance easier, reduce debug time, and reduce the tool's proof runtime.
  • Set up, run, and analyze results from formal analysis.
  • Identify designs upon which formal is likely to be successful while understanding formal complexity issues and how to identify and overcome them.
  • Use a systematic property development process to approach a completely new verification problem.
  • Understand the basics of formal coverage.

 The most recently updated release includes new modules on:

  • "Basic complexity handling" which discusses the complexity in formal and how to identify and handle them.
  • "Complexity reduction methods” which discusses the complexity reduction methods and which is suitable for which type of complexity problem.
  • “Coverage in formal” which discusses the basics of coverage in formal verification and how coverage can be used in formal.   

Take this course to learn the basics of formal verification. 

What's Next? 

You can check out the complete training: Jasper Formal Fundamentals. There is a free online version of the training available 24/7 for all customers with a Cadence Learning and Support Portal account. If you are interested in an instructor-led version of the training, please contact Cadence Training. And don't forget to obtain your digital badge after completing the training!

You can also check Jasper University page for more materials on formal analysis and Jasper apps. 

Related Trainings 

Jasper Formal Expert Training Course | Cadence

Verilog Language and Application Training Course | Cadence

SystemVerilog for Design and Verification Training Course | Cadence

SystemVerilog Assertions Training Course | Cadence

Related Training Bytes 

Jasper Formal Property Verification (FPV) App: Basic Usage Demo (Video)

Jasper Formal Methodology playlist

Related Training Blogs

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

Training Insights: Introducing the C++ Course for All Your C++ Learning Needs!

Training Insights: Reaching Your Verification Closure Using Verisium Manager

Training Insights - Free Online Courses on Cadence Learning and Support Portal




al

Partial Header Encryption in Integrity and Data Encryption for PCIe

Cadence PCIe/CXL VIP supports Partial Header Encryption in Integrity and Data Encryption. (read more)




al

Deferrable Memory Write Usage and Verification Challenges

Real-time data processing and responsiveness are crucial in applications such as high-performance computing, data centers, and other systems requiring low-latency data transfers. Deferrable Memory Write enables efficient use of PCIe bandwidth and resources by intelligently managing memory write operations based on system dynamics and workload priorities. By effectively leveraging Deferrable Memory Write [DMWr], devices can achieve optimized performance and responsiveness, aligning with the evolving demands of modern computing applications.

What Is Deferrable Memory Write?

The Deferrable Memory Write (DMWr) ECN introduced this new memory transaction type, which was later officially incorporated into PCIe 5.0 and CXL 2.0. DMWr flows much like the existing Read/Write memory transactions; the major difference is that the Requester attempts to write to a given location in Memory Space using the non-posted DMWr TLP type, and the completion of the memory write can be postponed to improve overall system efficiency and performance. The write operation can be delayed or deferred until other, higher-priority tasks complete.

The Deferrable Memory Write (DMWr) requires the Completer to return an acknowledgment to the Requester and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request.

DMWr provides a mechanism for Endpoints and Hosts to choose to carry out or defer incoming DMWr Requests. This mechanism can be used by Endpoints and Hosts to simplify the design of flow control, reduce latency, and improve throughput. The Deferrable Memory Write TLP format is shown in Figure A.

 

(Fig A) Deferrable Memory Write TLP format.

Example Scenario

Here's how DMWr works with a simplified example: imagine a system with an endpoint device (Device A) and a host CPU (Device B). Device B wants to write data to Device A's memory, but due to varying reasons such as system bus congestion or prioritization of other transactions, Device A can defer the completion of the memory write request. Just follow these steps (a minimal sketch of this handshake appears after the list):

  1. Initiation of Memory Write: Device B initiates a memory write transaction to Device A. This involves sending the memory write request along with the data payload over the PCIe physical layer link.
  2. Acknowledgment and Deferral: Upon receiving the memory write request, Device A acknowledges the transaction but may decide to defer its completion. Device A sends an acknowledgment (ACK) back to Device B, indicating it has received the data and intends to complete the write operation but not immediately.
  3. Deferred Completion: Device A defers the completion of the memory write operation to a later, more opportune time. This deferral allows Device A to prioritize other transactions or optimize the use of system resources, such as memory bandwidth or processor availability.
  4. Completion and Response: At a later point, Device A completes the deferred memory write operation and sends a completion indication back to Device B. This completion typically includes any status updates or additional information related to the transaction.
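
The following minimal Python sketch models this handshake at a very high level, purely for intuition. It is not the Cadence VIP API or the actual TLP encoding; the class names, return codes, and the cycle-based "busy" condition are all invented for illustration:

import collections

class Completer:
    """Toy model of a DMWr Completer that may defer incoming writes."""
    def __init__(self, busy_cycles):
        self.busy_cycles = busy_cycles   # cycles during which writes are deferred
        self.memory = {}

    def deferrable_write(self, addr, data, cycle):
        if cycle < self.busy_cycles:
            return "DEFERRED"            # acknowledge the request but refuse to service it yet
        self.memory[addr] = data
        return "SUCCESS"

class Requester:
    """Toy Requester that retries deferred DMWr transactions later."""
    def __init__(self, completer):
        self.completer = completer
        self.pending = collections.deque()

    def issue(self, addr, data, cycle):
        status = self.completer.deferrable_write(addr, data, cycle)
        if status == "DEFERRED":
            self.pending.append((addr, data))
        return status

    def retry_pending(self, cycle):
        for _ in range(len(self.pending)):
            addr, data = self.pending.popleft()
            self.issue(addr, data, cycle)   # re-queued automatically if deferred again

# The write at cycle 0 is deferred, then succeeds when retried at cycle 5.
completer = Completer(busy_cycles=3)
requester = Requester(completer)
print(requester.issue(0x1000, 0xAB, cycle=0))   # DEFERRED
requester.retry_pending(cycle=5)
print(completer.memory)                         # {4096: 171}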

Usage or Importance of DMWr

Deferrable Memory Write provides improvements in the following aspects:

  • Reduced Latency: By deferring less critical memory write operations, more critical transactions can be processed with lower latency, improving overall system responsiveness.
  • Improved Efficiency: Optimizes the utilization of system resources such as memory bandwidth and CPU cycles, enhancing the efficiency of data transfers within the PCIe architecture.
  • Enhanced Performance: Allows devices to manage and prioritize transactions dynamically, potentially increasing overall system throughput and reducing contention.

Challenges in the Implementation of DMWr Transactions

The implementation of deferrable memory writes (DMWr) introduces several advancements and challenges in terms of usage and verification:

  1. Timing and Synchronization: DMWr allows transactions to be deferred, which complicates meeting timing requirements and completing transactions within acceptable timing windows without protocol violations. Ensuring proper synchronization between devices becomes critical to prevent data loss or corruption.
  2. Protocol Compliance: Verification must ensure compliance with ECN PCIe 6.0 and CXL specifications regarding when and how DMWr transactions can be initiated and completed.
  3. Performance Optimization: While DMWr can improve overall system performance by reducing latency, verifying its impact on system performance and ensuring it meets expected benchmarks is crucial.
  4. Error Handling: Handling errors related to deferred transactions adds complexity. Verifying error detection and recovery mechanisms under various scenarios (e.g., timeout during deferral) is essential.

Verification Challenges of DMWr Transactions

Verifying DMWr transactions involves checks covering functionality, timing, protocol compliance, performance, error scenarios, and intended security usage, as well as data integrity at both the PCIe and CXL levels.

  1. Functional Verification: Verifying the correct implementation of DMWr at both ends of the PCIe link (transmitter and receiver) to ensure proper functionality and adherence to specifications.
  2. Timing Verification: Validating timing constraints associated with deferring writes and ensuring transactions are completed within specified windows without violating protocol rules.
  3. Protocol Compliance Verification: Checking that DMWr transactions adhere to PCIe and CXL protocol rules, including ordering rules and any restrictions on deferral based on the transaction type.
  4. Performance Verification: Assessing the impact of DMWr on overall system performance, including latency reduction and bandwidth utilization, through simulation and testing.
  5. Error Scenario Verification: Creating and testing scenarios to verify error handling mechanisms related to DMWr, such as timeouts, retries, and recovery procedures.
  6. Security Considerations: Assessing potential security vulnerabilities related to DMWr, such as data integrity risks during deferred transactions or exposure to timing-based attacks.

A major verification challenge is timing and synchronization verification in the context of implementing deferrable memory writes (DMWr), which is crucial due to the inherent complexities introduced by deferred transactions. Here are the key issues and the approaches to address them:

Timing and Synchronization Issues

  1. Transaction Completion Timing:
    • Issue: Ensuring deferred transactions are completed within the specified time window without violating protocol timing constraints.
    • Approach: Design an internal timer and checker to model worst-case scenarios where transactions are deferred and verify that they complete within allowable latency limits. This involves simulating various traffic loads and conditions to assess timing under different scenarios (a small illustrative sketch of such a checker appears after this list).
  2. Ordering and Dependencies:
    • Issue: Verifying that transactions deferred using DMWr maintain the correct ordering and dependencies relative to non-deferred transactions.
    • Approach: Implement test scenarios that include mixed traffic of DMWr and non-DMWr transactions. Verify through simulation or emulation that dependencies and ordering requirements are correctly maintained across the PCIe link.
  3. Interrupt Handling and Response Times:
    • Issue: Verifying the handling of interrupts and ensuring timely responses from devices involved in DMWr transactions.
    • Approach: Implement test cases that simulate interrupt generation during DMWr transactions. Measure and verify the response times to interrupts to ensure they meet system latency requirements.
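
As an illustration of the internal timer-and-checker idea mentioned in item 1 above, here is a small, hypothetical scoreboard-style sketch in Python. The class, the method names, and the latency budget are assumptions made for this example, not values or APIs from the PCIe specification or the Cadence VIP:

class DeferralTimeoutChecker:
    """Flags DMWr transactions that stay deferred longer than an assumed budget."""
    def __init__(self, max_defer_cycles=1000):   # budget is an assumption, not a spec value
        self.max_defer_cycles = max_defer_cycles
        self.outstanding = {}                    # tag -> cycle at which the write was deferred

    def on_deferred(self, tag, cycle):
        self.outstanding.setdefault(tag, cycle)

    def on_completed(self, tag, cycle):
        start = self.outstanding.pop(tag, None)
        if start is not None and cycle - start > self.max_defer_cycles:
            print(f"ERROR: tag {tag} completed after {cycle - start} cycles (budget exceeded)")

    def on_end_of_test(self, cycle):
        for tag, start in self.outstanding.items():
            print(f"ERROR: tag {tag} deferred at cycle {start} never completed")

# Usage sketch
chk = DeferralTimeoutChecker(max_defer_cycles=500)
chk.on_deferred(tag=7, cycle=100)
chk.on_completed(tag=7, cycle=450)   # within budget: no error
chk.on_end_of_test(cycle=10_000)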

In conclusion, while deferrable memory writes in PCIe and CXL offer significant performance benefits, their implementation and verification present several challenges related to timing, protocol compliance, performance optimization, and error handling. Addressing these challenges requires rigorous traffic testing in the testbench, advanced verification methodologies, and a thorough understanding of the PCIe specifications; the motivation behind introducing Deferrable Memory Write also carries forward into CXL, where it is used effectively. The outcome of DMWr verification is to confirm that the performance benefits of DMWr (reduced latency, improved throughput) are achieved without compromising timing integrity or violating protocol specifications.

In summary, PCIe and CXL are complex protocols with many verification challenges. You must understand many new specification changes and build a robust verification plan for the new features and for backward-compatible tests impacted by those features. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specifications and provides an effective and efficient way to verify the components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with Early Adopter customers to speed up every verification stage.

More Information




al

Sigrity and Systems Analysis 2024.1 Release Now Available

The Sigrity and Systems Analysis (SIGRITY/SYSANLS) 2024.1 release is now available for download at Cadence Downloads. For the list of CCRs fixed in this release, see the README.txt file in the installation hierarchy.

Here is a list of some of the key updates in the SIGRITY/SYSANLS 2024.1 release. For more details about these and all the other new and enhanced features introduced in this release, refer to the following document: Sigrity Release Overview and Common Tools What's New.

Supported Platforms and Operating Systems

  • X86_64 (lnx86): Development OS: RHEL 8.4. Supported OS: RHEL 8.4 and above, RHEL 9, SLES 15 (SP3 and above).
  • Windows (64 bit): Development OS: Windows Server 2022. Supported OS: Windows 10, Windows 11, Windows Server 2019, Windows Server 2022.

Systems Analysis 2024.1

Clarity 3D Solver

  • Clarity 3D Layout Structure Optimization Workflow: A new workflow, Clarity 3D Layout Structure Optimization Workflow, has been added to Clarity 3D Layout. This workflow integrates Allegro PCB Designer with Clarity 3D Layout for high-speed structure optimization.
  • Component Geometry Model Editor: The new Clarity 3D Layout editor lets you set up ports, solder bumps/balls/extrusions, and two-terminal and multi-terminal circuits using a single GUI.
  • Coaxial Open Port Option Added to Port Setup Wizard: The Coaxial Open Port option lets you create ports for each target net pin and reference net pin in Clarity 3D Layout. The nearby reference net pins are then used as a reference for each target net pin, reducing the number of ports needed. In addition, the ports of unused reference net pins are shorted to the ground.
  • Parametric Import Option Added: Two new options, Parametric Import and Default Import, have been added to the Tools – Launch Clarity3DWorkbench menu. The Parametric Import option lets you import the design along with its parameters into Clarity 3D Workbench. The Default Import option lets you ignore the parameters when importing the design into Clarity 3D Workbench.
  • Component Library Added to Generate 3D Components: Clarity 3D Workbench now includes a new component library that lets you use predefined 3D component templates or add existing 3D components to create 3D designs and simulation models.
  • AI-Powered Content Search Capability: Clarity 3D Workbench and Clarity 3D Transient Solver now support an AI-powered capability for searching the content and displaying relevant information.
  • Expression Parser to Handle Undefined Parameters: Clarity 3D Workbench and Clarity 3D Transient Solver support writing expressions or equations containing undefined parameters in the Property window to describe a simulation variable. The improved expression parser automatically detects any undefined parameter in an expression and prompts users to specify their values. This capability lets you define a model or a simulation variable as a function instead of specifying static values.

For detailed information, refer to the Clarity 3D Layout User Guide and Clarity 3D Workbench User Guide on the Cadence Support portal.

Clarity 3D Transient Solver

  • Mesh Processing Improved to Simulate Large Use Cases: Clarity 3D Transient Solver leverages a new meshing algorithm that enhances overall mesh processing, specifically for large designs and use cases. The new algorithm dramatically improves the mesh quality, minimum mesh size, number of mesh key points, total mesh number, and memory usage.
  • Advanced Material Processing Engine: The material processing capability has been enhanced to handle thin outer metal, which previously resulted in open and short issues in some designs. In addition, the material processing engine offers improved mode extraction for particular use cases, including waveguide and coaxial designs.
  • Characteristic Impedance Calculation Improved: The solver engine now uses a new analytical calculation method to calculate the characteristic impedance of coaxial designs with improved accuracy.

For detailed information, refer to the Clarity 3D Transient Solver User Guide on the Cadence Support portal.

Celsius Studio

  • Celsius Interchange Model Introduced: Celsius Studio now supports Celsius Interchange Model generation, which is a 3D model derived from detailed physical designs for multi-physics and multi-scale analysis. This Celsius Interchange Model file (.cim) serves as a design information carrier across Celsius Studio tools, enabling a variety of simulation and analysis tasks.
  • Celsius 3DIC Thermal Workflow Improvements: The Thermal Simulation workflows in Celsius 3DIC have been significantly enhanced. Key improvements include:
    • Advanced Power Setup with Transient Power Function and Multi Mode options
    • Enhanced GUI for the Mesh Control and Simulation Control tabs
    • Improved meshing capabilities
    • Celsius Interchange Model (.cim) generation
    • Material library support for block and connections
    • Import of Heat Transfer Coefficients (HTCs) from a CFD file
    • Bump creation through the Bump Array Wizard
    • Layer Stackup CSV file generation
  • Celsius 3DIC Warpage and Stress Workflow Enhancements: The Warpage and Stress workflow in Celsius 3DIC has undergone significant improvements, such as:
    • Improved multi-stage warpage simulation flow for the 3DIC packaging process
    • Enhanced GUI for the Mesh Control, Simulation Control, and Stress Boundary Conditions tabs
    • Support for large deformations and temperature profiles
    • Bump creation through the Bump Array Wizard
    • New constraint types
    • Enhanced meshing capabilities
  • Geometric Nonlinearity Support in Warpage and Stress Analysis: Large deformation analysis is now supported in warpage and stress studies. This study uses the Total Lagrangian approach to model geometric nonlinearities in simulation, which allows accurate prediction of final deformations.
  • Thermal Network Extraction and Simulation: In the solid extraction flow in Celsius 3D Workbench, you can now import area-based power map files to create terminals. For designs with multiple blocks, this capability allows automatic terminal creation, eliminating the need to manually create and set up 2D sheets individually. Additionally, the thermal throttling feature is now supported in Celsius Thermal Network. This makes it ideal for preliminary analyses or when a quick estimation is required. It runs significantly faster than 3D models, allowing for quicker iterations and more efficient decision-making.

For detailed information, refer to the Celsius 3DIC User Guide, Celsius Layout User Guide, and Celsius 3D Workbench User Guide on the Cadence Support portal.

Sigrity 2024.1

Layout Workbench

  • Improved Graphical User Interface: A new option, Use Improved User Interface, has been added in the Themes page of the Options dialog box in the Layout Workbench GUI. In the new GUI, the toolbar icons and menu options have been enhanced and rearranged.

For detailed information, refer to the Layout Workbench User Guide on the Cadence Support portal.

Broadband SPICE

  • Python Script Integration with Command Line for Simulation Tasks: Broadband SPICE lets you run Python scripts directly from the command line for performing simulation and analysis. The new -py and *.py options make it easier to integrate Python scripts with the command-line operations. This update streamlines the process of automating and customizing simulations from the command line, which makes your simulation tasks faster and easier.

For detailed information, refer to the Broadband SPICE User Guide on the Cadence Support portal.

Celsius PowerDC

  • Block Power Assignment (BPA) File Format Support: PowerDC now supports the BPA file format. Similar to the Pin Location (PLOC) file, the BPA file is a current assignment file that defines the total current of a power grid cell, which is then equally distributed across the power pins within the cell. This provides better control over the power distribution.
  • Ability to Run Multiple IR Drop Cases Sequentially: You can now select multiple result sinks from the Current-Limited IR Drop flow and run IR Drop analysis for them sequentially. PowerDC automatically runs the simulations in sequence after you select multiple result sinks. This saves time by automating the process.
  • Enhanced Support for Mixed Conversion Devices: PowerDC now supports mixing different conversion devices, such as switching regulators and linear regulators, within a single DC-DC/LDO instance. This enhancement offers added flexibility by letting you configure each instance in your design according to your specific needs.

For detailed information, refer to the PowerDC User Guide on the Cadence Support portal.

PowerSI

  • Monte Carlo Method Added: A new option, Monte Carlo Method, has been added in the Optimality dialog box. This option lets you create multiple random samples to depict variations in the input parameters and assess the output.
  • Channel Check Optimization Added: The S-Parameter Assessment workflow in PowerSI now supports Channel Check Optimization. It uses the AI-driven Multidisciplinary Analysis and Optimization (MDAO) technology that lets you optimize your design quickly and efficiently with no accuracy loss.

For detailed information, refer to the PowerSI User Guide on the Cadence Support portal.

SPEEDEM

  • Multi-threaded Matrix Solver Support Added: The Enable Multi-threaded Matrix Solver check box has been added that lets you accelerate the simulation speed for high-performance computing. This check box provides two options, Automatic and Always, to include the -lhpc4 or -lhpc5 parameter, respectively, in the SPEEDEM Simulator (SPDSIM) before running the simulation.

For detailed information, refer to the SPEEDEM User Guide on the Cadence Support portal.

XtractIM

  • Options to Skip or Calculate Special DC-R Simulation Results: The Skip DC_R of Each Path and Only DC_R of Each Path options have been added to the Setup menu.
    • Skip DC_R of Each Path: This option lets you skip the calculation of the DC-R result during the simulation. Other results, such as SPICE T-model, RL_C of Each Path, Coupling of Each Path, etc., are still calculated.
    • Only DC_R of Each Path: This option lets you calculate only the DC-R result during the simulation. Other results, such as SPICE T-model, RL_C of Each Path, Coupling of Each Path, etc., are not calculated.
  • Color Assignment for Pin Matching: The MCP Auto Connection window includes the Display Color Editor, which lets you assign a color for pin matching. It helps you easily identify the matching pins in the left and right sections of the MCP Auto Connection window.
  • Ability to Save Simulations Individually: The Save each simulation individually check box has been added to the Tools - Options - Edit Options - Simulation (Basic) - General form. Select this check box and run the simulation to generate a simulation results folder containing files and logs with a timestamp for each simulation.
  • Reuse of SPD File Settings: The XtractIM setup check box lets you import an existing package setup to reuse the configurations and settings from one .spd file to another.

For detailed information, refer to the XtractIM User Guide on the Cadence Support portal.

Documentation Enhancements

  • Cloud-Based Help System Upgraded: The cloud-based help system, Doc Assistant, has been upgraded to version 24.10, which contains several new features and enhancements over the previous 2.03 version.

Sigrity Release Team

Please send your questions and feedback to sigrity_rmt@cadence.com.




al

Ascent: Training Insights: DE-HDL Libraries in Allegro X System Capture

Allegro X System Capture offers a complete ecosystem for library development. This post introduces the latest DE-HDL Library Development using System Capture course in which you learn how to create different library objects. As a librarian, you often work with numerous libraries. Your tasks include creating or modifying symbols for libraries. To use Allegro X System Capture to create a library, you can follow the steps in the following flowchart:

Let’s go through each step in detail.

Setting the CDS_SITE Variable
Before you start library development for a new project, set the CDS_SITE system environment variable. This step is required to access libraries and other configuration files.

Creating a Project in Allegro X System Capture
The next step is to create a project in Allegro X System Capture.

Adding a Library to the Project
Symbol development consists of creating symbol graphics, electrical data, and properties used by different tools in the PCB design flow. To add a library to a project, first create a library in the Libraries pane of the Project Explorer.

Creating Library Symbols
The library development process supports the creation of various types of symbols.

Creating a Symbol with Multiple Views
You can generate multiple views of the same symbol using the Duplicate command. For example, a discrete symbol, such as a resistor, can have multiple views, as shown in the following image:

Creating a Split Symbol
For advanced designs, you often need to create library symbols and break them into multiple sections to support the design process. When a symbol shows all the logical pins in the physical package, it is called a single-section or flat symbol. Many large ICs have several pins and the symbols need to fit on a single schematic page. One workaround is to use vector pin names on a symbol to reduce its size, although manufacturers prefer schematics that show each pin. You can divide these high-pin-count devices into smaller pieces, where each piece is a separate version of the part. Such parts are referred to as split parts or multi-section symbols. For multi-section symbols, you can create two types of split parts: symmetrical and asymmetrical.

Symmetrical Split Symbols
A symmetrical split symbol has only one symbol graphic, which holds two or more identical logic symbols, each with its own unique physical pin numbers. You can create a symmetrical split symbol using the Duplicate Section icon in the canvas window. Each symbol section contains the same set of pins but different pin numbers, as shown in the following image:

Asymmetrical Split Symbols
An asymmetrical split symbol is a symbol whose physical package contains one or more unique schematic symbols. You can create an asymmetrical split symbol by clicking the New Section icon in the canvas window. Asymmetrical symbols have a unique set of logical pins, as shown in the following image:

Creating Symbols Using the Spreadsheet Interface
To simplify the development of large symbols, Allegro X System Capture has a Spreadsheet Interface. You can copy from a spreadsheet into the interface. This saves time and helps minimize errors introduced by manual entry.

In conclusion, the DE-HDL library development using Allegro X System Capture course involves several critical steps and supports various symbol creation techniques. This course helps librarians create and modify symbols effortlessly and deepens their understanding of library development within Allegro X System Capture.

To learn more about this topic, enroll in the DE-HDL Library Development using Allegro X System Capture course on the Cadence Support portal. Click the training byte link now or visit Cadence Support and search for training bytes under Video Library. If you find the post useful and want to delve deeper into training details, enroll in the following online training course for lab instructions and a downloadable design: DE-HDL Library Development using Allegro X System Capture (Online). You can become Cadence Certified once you complete the course.

Cadence Training Services now offers free Digital Badges for all popular online training courses. These badges indicate proficiency in a certain technology or skill and give you a way to validate your expertise to managers and potential employers. You can add the digital badge to your email signature or any social media channels, such as Facebook or LinkedIn, to highlight your expertise. To find out more, see the blog post Take a Cadence Masterclass and Get a Badge. You might also be interested in the training Learning Map that guides you through recommended course flows as well as tool experience and knowledge-level training modules. To find information on how to get an account on the Cadence Learning and Support portal, see here.

SUBSCRIBE to the Cadence training newsletter to be updated about upcoming training, webinars, and much more. If you have any questions about courses, schedules, online training, blended/virtual live training, or public or onsite live training, reach out to us at Cadence Training.




al

Solutions to Maximize Data Center Performance Featured at OCP Global Summit 2024

The demand for higher compute performance, energy efficiency, and faster time-to-market drove the conversations at this year's Open Compute Project (OCP) Global Summit in San Jose, California. It was the scene of showcasing groundbreaking innovations, expert-led sessions, and networking opportunities to drive the future of data center technology. For those who didn't get to attend or stop by our booth, here's a recap of Cadence's comprehensive solutions that enable next-generation compute technology, AI data center design, analysis, and optimization.

Optimized Data Center Design and Operations
As the data center community increasingly faces demands for enhanced efficiency, thermal management, sustainability, and performance optimization, data center operators, IT managers, and executives are looking for solutions to these challenges. At the Cadence booth, attendees explored the Cadence Reality Digital Twin Platform and Celsius EC Solver. These technologies are pivotal in achieving high-performance standards for AI data centers, providing advanced digital twin modeling capabilities that redefine next-generation data center design and operation. The Celsius EC Solver demonstration showed how it solves challenging thermal and electronics cooling management problems with precision and speed.

CadenceCONNECT: Take the Heat Out of Your AI Data Center
Cadence hosted a networking reception on October 16 titled "Take the Heat Out of Your AI Data Center." In today's AI era, managing the heat generated by high-density computing environments is more critical than ever. This reception offered insights into current and emerging data center technologies, digital twin cooling strategies that deliver energy-saving operations, and a chance to engage with industry leaders, Cadence experts, and peers to explore the latest cooling, AI, and GPU acceleration advancements. Here's a recap:

  • Researcher, author, and entrepreneur Dr. Jon Koomey highlighted the inefficiency of data centers in his talk "The Rise of Zombie Data Centers," noting that 20-30% of their capacity is stranded and unused. He advocated for organizational changes and technological solutions like digital twins to reduce wasted energy and improve computational effectiveness as AI deployments increase.
  • In "A New Millennium in Multiphysics System Analysis," Cadence Corporate VP Ben Gu explained the company's significant strides in multiphysics system analysis, evolving from chip simulation to a broader application of computational software for simulating various physical systems, including entire data centers. He noted that the latest Cadence venture, a digital twin platform for data center optimization, opened the opportunity to use simulation technology to optimize the efficiency of data centers.
  • Senior Software Engineering Group Director Albert Zeng highlighted the Cadence Reality DC suite's ability to transform data center operations through simulation, emphasizing its multi-phase engine for optimal thermal performance and the integration of AI capabilities for enhanced design and management.
  • A panel discussion titled "Turning AI Factory Blueprints into Reality at the Speed of Light" featured industry experts from NVIDIA, Norman Wright Precision Environmental and Power, NV5, Switch Data Centers, and Cadence, who explored the evolving requirements and multidimensional challenges of AI factories, emphasizing the need for collaboration across the supply chain to achieve high-performing and sustainable data centers.

Watch the highlights.

Transforming Designs from Chips to Data Centers
The OCP Global Summit 2024 has reaffirmed its status as a pivotal event for data center professionals seeking to stay at the forefront of technological advancements. Cadence's contributions, from groundbreaking digital twin technologies to innovative cooling strategies, have shed light on the path forward for efficient, sustainable data centers. For data center professionals, IT managers, and engineers, the insights gained at this summit are invaluable in navigating the challenges and opportunities presented by the burgeoning AI era.

Partnering with Arm
Arm Total Design: Cadence is a member of the Arm Total Design program. At an invitation-only special Arm event, Cadence's VP of Research and Development, Lokesh Korlipara, delivered a presentation focusing on data center challenges and design solutions with Arm Neoverse Compute Subsystem (CSS). The session highlighted:

  • Efficient integration of Arm Neoverse CSS into system on chips (SoCs) with pre-integrated connectivity IP
  • Performance analysis and verification of the Neoverse CSS integration into the SoC through Cadence's System VIP verification suite and automated testbench creation, enhancing both quality and productivity
  • Jumpstarting designs through Cadence's collaboration with Arm for 3D-IC system planning, chiplets, and interposers
  • Design Services readiness and global scale to support and/or deliver the most demanding Arm Neoverse CSS-based SoC design projects

Cadence Supports Arm CSS in Arm Booth
During the event, Cadence conducted a demo in the Arm booth that showcased the Cadence System VIP verification suite. The demo highlighted automated testbench creation and performance analysis for integrating the Arm CSS into SoCs while enhancing verification quality and productivity.

Summary
Cadence offers data center solutions for designing everything from the compute and networking chips to the board, racks, data centers, and campuses. Stay connected with Cadence and other industry leaders to continue exploring the innovations set to redefine the future of data centers.

Learn More
Cadence Joins Arm Total Design
Cadence Arm-Based Solutions
Cadence Reality Digital Twin Platform




al

Celebrating Milestones: The Cadence Bangalore Toastmasters Club’s Journey

On November 5, 2024, the Cadence Bangalore Toastmasters Club celebrated a significant milestone by hosting its 50th meeting. Established in December 2020, the club was created to provide a supportive environment for individuals looking to improve their communication and leadership skills. Over the years, the club has evolved into a vibrant community filled with success stories of personal development and newfound confidence.

A testament to the club's dedication is its achievement of the "Select Distinguished Club" status during the 2023-2024 program year. By fulfilling 7 out of 10 distinguished goals, the club highlighted its commitment to excellence, a success driven by its vibrant members' relentless focus and perseverance. The strategic insight gained from regular Toastmasters committee meetings and the influential "Moments of Truth" sessions held in 2023 and 2024 are key to this success.

Our club members have consistently demonstrated strong performance in various speech contests, with notable achievements across multiple levels. In 2023, members excelled in Evaluation and Table Topics contests, reaching the district level while advancing to the Division Level in the International Speech Contest. Continuing their success into 2024, members again qualified for area-level contests, securing third-place positions in the Evaluation and Table Topics categories, highlighting the club's dedication and competitive spirit.

The 50th meeting was based on the theme of serendipity. It was not only a milestone celebration but also a vibrant festival of achievements and growth. The day buzzed with energy as activities like a spirited Treasure Hunt injected enthusiasm and camaraderie among attendees. Distinguished guests, including Kripa Venkitachalam and Madhavi Rao, enriched the occasion with inspiring speeches. Madhavi reignited the club's spirit, while Kripa's discourse on the Growth Mindset and the "Power of Yet" encouraged members to pursue continuous self-improvement.

The Cadence Bangalore Toastmasters Club is enthusiastic about its promising future and is committed to creating an environment that promotes personal and professional growth. Many members are close to completing their Toastmasters levels and pathways, and this term, a new group of approximately 30 individuals has joined, bringing the total membership to 52. This vibrant community is just beginning its journey and is eager to reach new milestones together through mutual support and a shared commitment to excellence.

The transformations experienced by many club members are truly compelling. They often share how the club has significantly improved their communication skills and boosted their confidence. One member recalls, "Before joining, I found public speaking intimidating. Now, I embrace every opportunity to share my ideas." Another member highlights how the club's supportive environment helped him overcome his fear of public speaking, propelling his career to new heights. This culture of constructive feedback and continuous improvement has inspired countless members to pursue their dreams with renewed determination and optimism.

The Cadence Bangalore Toastmasters Club's journey is a living testament to the power of community and the potential within each of us to grow and achieve greatness. As the club continues to evolve and inspire, it serves as a beacon for those aspiring to transform their skills and seize their moment in the spotlight. Learn more about life at Cadence.




al

Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is an essential goal. In the field of verification, randomization is a key technique that drives robust PCIe verification. It introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more relevant details and understanding of PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature.

The Importance of Data Integrity and Data Encryption in PCIe Devices

  • Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification.
  • Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds.

Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64 GT/s in PCIe 6.0 and 128 GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a very crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details on this.

Randomization in PCIe Verification

Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures. For PCIe IDE verification, randomization helps us verify behavior more efficiently.

Randomization for Data Integrity Verification

Here are some approaches to randomized verification that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with normal verification methods.

1. Randomized Packet Injection: This technique randomizes data packets and injects them into the communication stream between devices. Here we inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system's ability to detect and reject unauthorized or invalid packets, checking that encryption/decryption occurs correctly across packets. On verifying, we check whether the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checks. The technique helps assess the resilience of the IDE feature in PCIe in the following terms:
  (i) Data corruption: Detecting if the system can maintain data integrity.
  (ii) Encryption failures: Testing the robustness of the encryption under random data injection.
  (iii) Packet ordering errors: Ensuring reordering does not affect data delivery.

2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation:
  (i) Detects and responds to unexpected errors.
  (ii) Maintains secure communication under stress.
  (iii) Follows the PCIe error recovery and reporting mechanisms (AER – Advanced Error Reporting).
  (iv) Ensures encryption and decryption states stay synchronized across endpoints.

3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads.

Randomization for Data Encryption Verification

Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios. Randomization in data encryption verification ensures that vulnerabilities, such as key reuse or predictable patterns, are identified and mitigated.

1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoding assumptions. This ensures that the encryption logic behaves correctly across all possible inputs.

2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, we can follow the diagram below, which illustrates a detailed example key programming flow using the IDE_KM protocol.

Figure 1: IDE_KM Example

As Figure 1 shows, the functionality of the IDE_KM protocol involves Start of IDE_KM Session, Device Capability Discovery, Key Request from the Host, Key Programming to the PCIe Device, and Key Acknowledgment. First, the Host starts the IDE_KM session by detecting the presence of the PCIe devices; if the device supports the IDE protocol, the system continues with the key programming process. Then a query occurs to discover the device's encryption capabilities; it determines whether the device supports dynamic key updates or static keys. Then the host sends a request to the Key Management Entity to obtain a key suitable for the device. Once the key is obtained, the host programs the key into the IDE Controller on the PCIe endpoint. Both the host and the device now share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established. From this point, all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM plays a crucial role in distributing keys in a secure manner and maintaining encryption and integrity for PCIe transactions. This key programming flow ensures that a secure communication channel is established between the host and the PCIe device. Hence, the randomized key approach ensures that the encryption does not repeat patterns.

3. Randomization of PHE: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. PHE is validated using a variety of traffic; incorporating randomization in the APIs provided for validating the PHE feature adds more robust encryption to the data. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this.

Figure 2: High-Level Flow for Partial Header Encryption

4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are supposed to be configured considering the memory address range of the IDE partner ports. The fields of the IDE address registers are split into multiple values such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks considering 32- or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, and so on. Randomized verification can help cover all of these corner cases. Please refer to Figure 3.

Figure 3: IDE Address Association Register

5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage.

Challenges of IDE Randomization and Their Solutions

Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs increases complexity: encryption masks data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks on the IDE callback to analyze encrypted traffic without decrypting it. Randomization can also trigger unexpected failures, which are often difficult to reproduce. By using seed-based randomization, a specific seed generates a repeatable random sequence; this helps in reproducing and analyzing the behavior more precisely (a small illustrative sketch appears at the end of this post).

Conclusion

Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps uncover subtle bugs and edge cases that non-randomized testing might miss. In Cadence PCIe VIP, we support full-fledged IDE verification with rigorous randomized verification that ensures data integrity. Robust and reliable encryption mechanisms ensure secure and efficient data communication. However, randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve with higher speeds and growing security demands, our Cadence PCIe VIP stays in line with industry demand and verifies high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0.

More Information:

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.
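
As a closing illustration of the constrained, seed-based randomization discussed in the challenges section above, here is a minimal Python sketch. It is not the Cadence VIP API; the constraint values, field names, and fault-injection probability are invented for this example:

import random

def random_ide_stimulus(seed, num_packets=5):
    """Generate a reproducible, constrained-random stimulus list.

    The constraints below (payload sizes, stream IDs, key indexes) are
    illustrative placeholders, not values taken from the PCIe/IDE spec.
    """
    rng = random.Random(seed)   # fixed seed -> repeatable "random" sequence
    stimulus = []
    for _ in range(num_packets):
        stimulus.append({
            "payload_len": rng.choice([16, 32, 64, 128]),   # constrained to a legal-looking set
            "stream_id": rng.randrange(0, 256),             # 0-255 selective streams
            "key_index": rng.randrange(0, 16),
            "inject_bit_flip": rng.random() < 0.1,          # occasional fault injection
        })
    return stimulus

# The same seed always reproduces the same scenario, which makes failures debuggable.
assert random_ide_stimulus(seed=42) == random_ide_stimulus(seed=42)
print(random_ide_stimulus(seed=42)[0])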